Test Report: KVM_Linux_crio 19360

cd79d30fb13c14d30ca0dbfe151ef256c3a20136:2024-07-31:35589

Failed tests (30/323)

Order  Failed test  Duration (seconds)
43 TestAddons/parallel/Ingress 153.63
45 TestAddons/parallel/MetricsServer 356.45
54 TestAddons/StoppedEnableDisable 154.37
175 TestMultiControlPlane/serial/StopSecondaryNode 141.72
177 TestMultiControlPlane/serial/RestartSecondaryNode 52.81
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 392.11
182 TestMultiControlPlane/serial/StopCluster 141.67
242 TestMultiNode/serial/RestartKeepsNodes 324.72
244 TestMultiNode/serial/StopMultiNode 141.34
251 TestPreload 180.96
259 TestKubernetesUpgrade 359.81
286 TestPause/serial/SecondStartNoReconfiguration 79.92
295 TestStartStop/group/old-k8s-version/serial/FirstStart 296.73
303 TestStartStop/group/no-preload/serial/Stop 139.09
306 TestStartStop/group/embed-certs/serial/Stop 148.47
307 TestStartStop/group/old-k8s-version/serial/DeployApp 0.54
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 114.63
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 17.38
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
320 TestStartStop/group/old-k8s-version/serial/SecondStart 736.63
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.92
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.87
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.37
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.62
327 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.62
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 468.16
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 364.63
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 104.94
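Any entry in this table can be retried in isolation against the same binary. The sketch below is a rough local repro, not the exact CI invocation: it assumes the minikube source tree (the test/integration package that the addons_test.go and helpers_test.go references in the logs point to), an already-built out/minikube-linux-amd64, and that the suite's --minikube-start-args flag accepts the same kvm2/crio arguments recorded in the Audit table further down.

    # Rough local repro of one failure from the table above (paths and flags are assumptions).
    go test -v -timeout 90m ./test/integration \
      -run 'TestAddons/parallel/Ingress' \
      -args --minikube-start-args='--driver=kvm2 --container-runtime=crio'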
TestAddons/parallel/Ingress (153.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-877061 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-877061 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-877061 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7cb45e46-5ce9-4814-ac2f-70c117f17949] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7cb45e46-5ce9-4814-ac2f-70c117f17949] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008230976s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-877061 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.433840331s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
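Status 28 from the remote process is curl's "operation timed out" exit code, so the request to 127.0.0.1:80 inside the VM never got an answer from the ingress controller before the ssh session gave up. A minimal manual re-check along the same path (profile name taken from this run; the explicit -m timeout and the follow-up kubectl queries are illustrative additions, not part of the test):

    # Repeat the failing probe with an explicit curl timeout and print the HTTP status (sketch).
    out/minikube-linux-amd64 -p addons-877061 ssh \
      "curl -s -m 30 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # If it still times out, check what the ingress addon actually deployed.
    kubectl --context addons-877061 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-877061 get ingress --all-namespaces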
addons_test.go:288: (dbg) Run:  kubectl --context addons-877061 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.25
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 addons disable ingress-dns --alsologtostderr -v=1: (1.671761948s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 addons disable ingress --alsologtostderr -v=1: (7.689394239s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-877061 -n addons-877061
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 logs -n 25: (1.154993674s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-408291                                                                     | download-only-408291 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| delete  | -p download-only-363533                                                                     | download-only-363533 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-782974 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | binary-mirror-782974                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40035                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-782974                                                                     | binary-mirror-782974 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-877061 --wait=true                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-877061 ssh cat                                                                       | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | /opt/local-path-provisioner/pvc-dc514d6f-0e3d-4ea7-a5f8-6c9da90dff2a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:13 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-877061 ip                                                                            | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | -p addons-877061                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | -p addons-877061                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-877061 ssh curl -s                                                                   | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-877061 addons                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-877061 ip                                                                            | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:09:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:09:41.662763 1101872 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:09:41.662874 1101872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:41.662884 1101872 out.go:304] Setting ErrFile to fd 2...
	I0731 20:09:41.662889 1101872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:41.663094 1101872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:09:41.663749 1101872 out.go:298] Setting JSON to false
	I0731 20:09:41.664864 1101872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13933,"bootTime":1722442649,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:09:41.664925 1101872 start.go:139] virtualization: kvm guest
	I0731 20:09:41.667171 1101872 out.go:177] * [addons-877061] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:09:41.668577 1101872 notify.go:220] Checking for updates...
	I0731 20:09:41.668585 1101872 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:09:41.669954 1101872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:09:41.671237 1101872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:09:41.672530 1101872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:41.673731 1101872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:09:41.675012 1101872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:09:41.676313 1101872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:09:41.708730 1101872 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:09:41.709984 1101872 start.go:297] selected driver: kvm2
	I0731 20:09:41.709996 1101872 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:09:41.710008 1101872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:09:41.710755 1101872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:41.710840 1101872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:09:41.725856 1101872 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:09:41.725916 1101872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:09:41.726113 1101872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:09:41.726164 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:09:41.726172 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:09:41.726180 1101872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:09:41.726255 1101872 start.go:340] cluster config:
	{Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:09:41.726353 1101872 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:41.728236 1101872 out.go:177] * Starting "addons-877061" primary control-plane node in "addons-877061" cluster
	I0731 20:09:41.729531 1101872 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:09:41.729574 1101872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:09:41.729585 1101872 cache.go:56] Caching tarball of preloaded images
	I0731 20:09:41.729663 1101872 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:09:41.729674 1101872 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:09:41.729952 1101872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json ...
	I0731 20:09:41.729970 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json: {Name:mkd574fe00eb57092056af5a3f09f0afc5a84337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:09:41.730107 1101872 start.go:360] acquireMachinesLock for addons-877061: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:09:41.730153 1101872 start.go:364] duration metric: took 32.305µs to acquireMachinesLock for "addons-877061"
	I0731 20:09:41.730171 1101872 start.go:93] Provisioning new machine with config: &{Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:09:41.730226 1101872 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:09:41.731857 1101872 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 20:09:41.732037 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:09:41.732108 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:09:41.747374 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0731 20:09:41.748130 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:09:41.748908 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:09:41.748943 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:09:41.749369 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:09:41.749589 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:09:41.749788 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:09:41.749951 1101872 start.go:159] libmachine.API.Create for "addons-877061" (driver="kvm2")
	I0731 20:09:41.749988 1101872 client.go:168] LocalClient.Create starting
	I0731 20:09:41.750036 1101872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:09:41.896487 1101872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:09:42.021690 1101872 main.go:141] libmachine: Running pre-create checks...
	I0731 20:09:42.021719 1101872 main.go:141] libmachine: (addons-877061) Calling .PreCreateCheck
	I0731 20:09:42.022258 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:09:42.022748 1101872 main.go:141] libmachine: Creating machine...
	I0731 20:09:42.022772 1101872 main.go:141] libmachine: (addons-877061) Calling .Create
	I0731 20:09:42.022957 1101872 main.go:141] libmachine: (addons-877061) Creating KVM machine...
	I0731 20:09:42.024328 1101872 main.go:141] libmachine: (addons-877061) DBG | found existing default KVM network
	I0731 20:09:42.025255 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.025078 1101894 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0731 20:09:42.025323 1101872 main.go:141] libmachine: (addons-877061) DBG | created network xml: 
	I0731 20:09:42.025353 1101872 main.go:141] libmachine: (addons-877061) DBG | <network>
	I0731 20:09:42.025366 1101872 main.go:141] libmachine: (addons-877061) DBG |   <name>mk-addons-877061</name>
	I0731 20:09:42.025376 1101872 main.go:141] libmachine: (addons-877061) DBG |   <dns enable='no'/>
	I0731 20:09:42.025382 1101872 main.go:141] libmachine: (addons-877061) DBG |   
	I0731 20:09:42.025391 1101872 main.go:141] libmachine: (addons-877061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 20:09:42.025397 1101872 main.go:141] libmachine: (addons-877061) DBG |     <dhcp>
	I0731 20:09:42.025404 1101872 main.go:141] libmachine: (addons-877061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 20:09:42.025409 1101872 main.go:141] libmachine: (addons-877061) DBG |     </dhcp>
	I0731 20:09:42.025417 1101872 main.go:141] libmachine: (addons-877061) DBG |   </ip>
	I0731 20:09:42.025422 1101872 main.go:141] libmachine: (addons-877061) DBG |   
	I0731 20:09:42.025429 1101872 main.go:141] libmachine: (addons-877061) DBG | </network>
	I0731 20:09:42.025444 1101872 main.go:141] libmachine: (addons-877061) DBG | 
	I0731 20:09:42.031118 1101872 main.go:141] libmachine: (addons-877061) DBG | trying to create private KVM network mk-addons-877061 192.168.39.0/24...
	I0731 20:09:42.096602 1101872 main.go:141] libmachine: (addons-877061) DBG | private KVM network mk-addons-877061 192.168.39.0/24 created
	I0731 20:09:42.096641 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.096547 1101894 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:42.096666 1101872 main.go:141] libmachine: (addons-877061) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 ...
	I0731 20:09:42.096683 1101872 main.go:141] libmachine: (addons-877061) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:09:42.096799 1101872 main.go:141] libmachine: (addons-877061) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:09:42.363268 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.363125 1101894 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa...
	I0731 20:09:42.403134 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.403014 1101894 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/addons-877061.rawdisk...
	I0731 20:09:42.403169 1101872 main.go:141] libmachine: (addons-877061) DBG | Writing magic tar header
	I0731 20:09:42.403185 1101872 main.go:141] libmachine: (addons-877061) DBG | Writing SSH key tar header
	I0731 20:09:42.403278 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.403199 1101894 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 ...
	I0731 20:09:42.403340 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061
	I0731 20:09:42.403363 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 (perms=drwx------)
	I0731 20:09:42.403374 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:09:42.403386 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:09:42.403396 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:42.403410 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:09:42.403419 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:09:42.403430 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:09:42.403435 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home
	I0731 20:09:42.403445 1101872 main.go:141] libmachine: (addons-877061) DBG | Skipping /home - not owner
	I0731 20:09:42.403490 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:09:42.403512 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:09:42.403526 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:09:42.403535 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:09:42.403544 1101872 main.go:141] libmachine: (addons-877061) Creating domain...
	I0731 20:09:42.404648 1101872 main.go:141] libmachine: (addons-877061) define libvirt domain using xml: 
	I0731 20:09:42.404678 1101872 main.go:141] libmachine: (addons-877061) <domain type='kvm'>
	I0731 20:09:42.404686 1101872 main.go:141] libmachine: (addons-877061)   <name>addons-877061</name>
	I0731 20:09:42.404694 1101872 main.go:141] libmachine: (addons-877061)   <memory unit='MiB'>4000</memory>
	I0731 20:09:42.404702 1101872 main.go:141] libmachine: (addons-877061)   <vcpu>2</vcpu>
	I0731 20:09:42.404716 1101872 main.go:141] libmachine: (addons-877061)   <features>
	I0731 20:09:42.404744 1101872 main.go:141] libmachine: (addons-877061)     <acpi/>
	I0731 20:09:42.404767 1101872 main.go:141] libmachine: (addons-877061)     <apic/>
	I0731 20:09:42.404785 1101872 main.go:141] libmachine: (addons-877061)     <pae/>
	I0731 20:09:42.404799 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.404815 1101872 main.go:141] libmachine: (addons-877061)   </features>
	I0731 20:09:42.404845 1101872 main.go:141] libmachine: (addons-877061)   <cpu mode='host-passthrough'>
	I0731 20:09:42.404872 1101872 main.go:141] libmachine: (addons-877061)   
	I0731 20:09:42.404886 1101872 main.go:141] libmachine: (addons-877061)   </cpu>
	I0731 20:09:42.404899 1101872 main.go:141] libmachine: (addons-877061)   <os>
	I0731 20:09:42.404916 1101872 main.go:141] libmachine: (addons-877061)     <type>hvm</type>
	I0731 20:09:42.404932 1101872 main.go:141] libmachine: (addons-877061)     <boot dev='cdrom'/>
	I0731 20:09:42.404950 1101872 main.go:141] libmachine: (addons-877061)     <boot dev='hd'/>
	I0731 20:09:42.404963 1101872 main.go:141] libmachine: (addons-877061)     <bootmenu enable='no'/>
	I0731 20:09:42.404973 1101872 main.go:141] libmachine: (addons-877061)   </os>
	I0731 20:09:42.404981 1101872 main.go:141] libmachine: (addons-877061)   <devices>
	I0731 20:09:42.404988 1101872 main.go:141] libmachine: (addons-877061)     <disk type='file' device='cdrom'>
	I0731 20:09:42.404998 1101872 main.go:141] libmachine: (addons-877061)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/boot2docker.iso'/>
	I0731 20:09:42.405006 1101872 main.go:141] libmachine: (addons-877061)       <target dev='hdc' bus='scsi'/>
	I0731 20:09:42.405013 1101872 main.go:141] libmachine: (addons-877061)       <readonly/>
	I0731 20:09:42.405018 1101872 main.go:141] libmachine: (addons-877061)     </disk>
	I0731 20:09:42.405026 1101872 main.go:141] libmachine: (addons-877061)     <disk type='file' device='disk'>
	I0731 20:09:42.405036 1101872 main.go:141] libmachine: (addons-877061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:09:42.405044 1101872 main.go:141] libmachine: (addons-877061)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/addons-877061.rawdisk'/>
	I0731 20:09:42.405053 1101872 main.go:141] libmachine: (addons-877061)       <target dev='hda' bus='virtio'/>
	I0731 20:09:42.405059 1101872 main.go:141] libmachine: (addons-877061)     </disk>
	I0731 20:09:42.405065 1101872 main.go:141] libmachine: (addons-877061)     <interface type='network'>
	I0731 20:09:42.405073 1101872 main.go:141] libmachine: (addons-877061)       <source network='mk-addons-877061'/>
	I0731 20:09:42.405085 1101872 main.go:141] libmachine: (addons-877061)       <model type='virtio'/>
	I0731 20:09:42.405092 1101872 main.go:141] libmachine: (addons-877061)     </interface>
	I0731 20:09:42.405097 1101872 main.go:141] libmachine: (addons-877061)     <interface type='network'>
	I0731 20:09:42.405105 1101872 main.go:141] libmachine: (addons-877061)       <source network='default'/>
	I0731 20:09:42.405110 1101872 main.go:141] libmachine: (addons-877061)       <model type='virtio'/>
	I0731 20:09:42.405120 1101872 main.go:141] libmachine: (addons-877061)     </interface>
	I0731 20:09:42.405133 1101872 main.go:141] libmachine: (addons-877061)     <serial type='pty'>
	I0731 20:09:42.405147 1101872 main.go:141] libmachine: (addons-877061)       <target port='0'/>
	I0731 20:09:42.405159 1101872 main.go:141] libmachine: (addons-877061)     </serial>
	I0731 20:09:42.405179 1101872 main.go:141] libmachine: (addons-877061)     <console type='pty'>
	I0731 20:09:42.405191 1101872 main.go:141] libmachine: (addons-877061)       <target type='serial' port='0'/>
	I0731 20:09:42.405201 1101872 main.go:141] libmachine: (addons-877061)     </console>
	I0731 20:09:42.405212 1101872 main.go:141] libmachine: (addons-877061)     <rng model='virtio'>
	I0731 20:09:42.405225 1101872 main.go:141] libmachine: (addons-877061)       <backend model='random'>/dev/random</backend>
	I0731 20:09:42.405235 1101872 main.go:141] libmachine: (addons-877061)     </rng>
	I0731 20:09:42.405245 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.405256 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.405266 1101872 main.go:141] libmachine: (addons-877061)   </devices>
	I0731 20:09:42.405277 1101872 main.go:141] libmachine: (addons-877061) </domain>
	I0731 20:09:42.405286 1101872 main.go:141] libmachine: (addons-877061) 
	I0731 20:09:42.411334 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:df:0f:9e in network default
	I0731 20:09:42.411928 1101872 main.go:141] libmachine: (addons-877061) Ensuring networks are active...
	I0731 20:09:42.411947 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:42.412594 1101872 main.go:141] libmachine: (addons-877061) Ensuring network default is active
	I0731 20:09:42.412904 1101872 main.go:141] libmachine: (addons-877061) Ensuring network mk-addons-877061 is active
	I0731 20:09:42.413406 1101872 main.go:141] libmachine: (addons-877061) Getting domain xml...
	I0731 20:09:42.414120 1101872 main.go:141] libmachine: (addons-877061) Creating domain...
	I0731 20:09:43.851922 1101872 main.go:141] libmachine: (addons-877061) Waiting to get IP...
	I0731 20:09:43.852863 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:43.853261 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:43.853285 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:43.853242 1101894 retry.go:31] will retry after 298.181213ms: waiting for machine to come up
	I0731 20:09:44.152997 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.153454 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.153491 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.153409 1101894 retry.go:31] will retry after 252.414928ms: waiting for machine to come up
	I0731 20:09:44.407994 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.408426 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.408457 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.408362 1101894 retry.go:31] will retry after 348.212309ms: waiting for machine to come up
	I0731 20:09:44.757936 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.758433 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.758458 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.758372 1101894 retry.go:31] will retry after 496.150607ms: waiting for machine to come up
	I0731 20:09:45.255934 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:45.256368 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:45.256391 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:45.256326 1101894 retry.go:31] will retry after 608.889823ms: waiting for machine to come up
	I0731 20:09:45.867608 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:45.868045 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:45.868074 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:45.867996 1101894 retry.go:31] will retry after 862.084322ms: waiting for machine to come up
	I0731 20:09:46.731956 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:46.732339 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:46.732373 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:46.732290 1101894 retry.go:31] will retry after 1.17249745s: waiting for machine to come up
	I0731 20:09:47.907191 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:47.907637 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:47.907669 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:47.907573 1101894 retry.go:31] will retry after 1.355826093s: waiting for machine to come up
	I0731 20:09:49.264747 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:49.265174 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:49.265206 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:49.265125 1101894 retry.go:31] will retry after 1.229798824s: waiting for machine to come up
	I0731 20:09:50.496596 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:50.497049 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:50.497083 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:50.496994 1101894 retry.go:31] will retry after 1.45034615s: waiting for machine to come up
	I0731 20:09:51.948563 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:51.949050 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:51.949083 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:51.949001 1101894 retry.go:31] will retry after 1.754586547s: waiting for machine to come up
	I0731 20:09:53.705998 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:53.706421 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:53.706534 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:53.706456 1101894 retry.go:31] will retry after 3.4501379s: waiting for machine to come up
	I0731 20:09:57.158577 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:57.159087 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:57.159112 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:57.158989 1101894 retry.go:31] will retry after 3.279487567s: waiting for machine to come up
	I0731 20:10:00.442593 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:00.442990 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:10:00.443015 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:10:00.442942 1101894 retry.go:31] will retry after 3.601297589s: waiting for machine to come up
	I0731 20:10:04.045584 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.046009 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has current primary IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.046029 1101872 main.go:141] libmachine: (addons-877061) Found IP for machine: 192.168.39.25
	I0731 20:10:04.046072 1101872 main.go:141] libmachine: (addons-877061) Reserving static IP address...
	I0731 20:10:04.046401 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find host DHCP lease matching {name: "addons-877061", mac: "52:54:00:2c:19:b6", ip: "192.168.39.25"} in network mk-addons-877061
	I0731 20:10:04.120261 1101872 main.go:141] libmachine: (addons-877061) DBG | Getting to WaitForSSH function...
	I0731 20:10:04.120294 1101872 main.go:141] libmachine: (addons-877061) Reserved static IP address: 192.168.39.25
	I0731 20:10:04.120307 1101872 main.go:141] libmachine: (addons-877061) Waiting for SSH to be available...
	I0731 20:10:04.122753 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.123163 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.123197 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.123298 1101872 main.go:141] libmachine: (addons-877061) DBG | Using SSH client type: external
	I0731 20:10:04.123323 1101872 main.go:141] libmachine: (addons-877061) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa (-rw-------)
	I0731 20:10:04.123354 1101872 main.go:141] libmachine: (addons-877061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:10:04.123371 1101872 main.go:141] libmachine: (addons-877061) DBG | About to run SSH command:
	I0731 20:10:04.123382 1101872 main.go:141] libmachine: (addons-877061) DBG | exit 0
	I0731 20:10:04.251947 1101872 main.go:141] libmachine: (addons-877061) DBG | SSH cmd err, output: <nil>: 
	I0731 20:10:04.252250 1101872 main.go:141] libmachine: (addons-877061) KVM machine creation complete!
	I0731 20:10:04.252561 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:10:04.253104 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:04.253294 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:04.253452 1101872 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:10:04.253468 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:04.254738 1101872 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:10:04.254754 1101872 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:10:04.254759 1101872 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:10:04.254766 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.256882 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.257190 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.257214 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.257411 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.257617 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.257779 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.257919 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.258093 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.258348 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.258366 1101872 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:10:04.367255 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:10:04.367285 1101872 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:10:04.367294 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.370014 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.370394 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.370419 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.370584 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.370790 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.370999 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.371187 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.371409 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.371596 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.371607 1101872 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:10:04.484156 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:10:04.484267 1101872 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:10:04.484284 1101872 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:10:04.484298 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.484605 1101872 buildroot.go:166] provisioning hostname "addons-877061"
	I0731 20:10:04.484634 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.484863 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.487182 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.487480 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.487509 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.487700 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.487916 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.488098 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.488242 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.488411 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.488630 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.488645 1101872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-877061 && echo "addons-877061" | sudo tee /etc/hostname
	I0731 20:10:04.612040 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877061
	
	I0731 20:10:04.612079 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.614823 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.615240 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.615295 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.615452 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.615671 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.615861 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.616053 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.616252 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.616445 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.616461 1101872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-877061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-877061/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-877061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:10:04.735643 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
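For reference, the hostname script above can be spot-checked on the guest once it returns; the expected values are exactly the ones the script sets, nothing beyond the log is assumed.
	hostname                       # expected: addons-877061
	grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 addons-877061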
	I0731 20:10:04.735674 1101872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:10:04.735731 1101872 buildroot.go:174] setting up certificates
	I0731 20:10:04.735743 1101872 provision.go:84] configureAuth start
	I0731 20:10:04.735756 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.736028 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:04.738410 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.738738 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.738778 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.738910 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.740917 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.741230 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.741251 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.741395 1101872 provision.go:143] copyHostCerts
	I0731 20:10:04.741482 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:10:04.741623 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:10:04.741683 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:10:04.741736 1101872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.addons-877061 san=[127.0.0.1 192.168.39.25 addons-877061 localhost minikube]
	I0731 20:10:04.841410 1101872 provision.go:177] copyRemoteCerts
	I0731 20:10:04.841474 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:10:04.841503 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.843984 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.844311 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.844347 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.844481 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.844716 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.844875 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.845040 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:04.929228 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:10:04.950948 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:10:04.971769 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:10:04.992732 1101872 provision.go:87] duration metric: took 256.974803ms to configureAuth
	I0731 20:10:04.992759 1101872 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:10:04.992921 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:04.993001 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.995547 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.995927 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.995954 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.996129 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.996351 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.996545 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.996663 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.996829 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.997012 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.997031 1101872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:10:05.262830 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:10:05.262887 1101872 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:10:05.262901 1101872 main.go:141] libmachine: (addons-877061) Calling .GetURL
	I0731 20:10:05.264296 1101872 main.go:141] libmachine: (addons-877061) DBG | Using libvirt version 6000000
	I0731 20:10:05.266672 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.267102 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.267131 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.267284 1101872 main.go:141] libmachine: Docker is up and running!
	I0731 20:10:05.267302 1101872 main.go:141] libmachine: Reticulating splines...
	I0731 20:10:05.267310 1101872 client.go:171] duration metric: took 23.517308382s to LocalClient.Create
	I0731 20:10:05.267336 1101872 start.go:167] duration metric: took 23.517385394s to libmachine.API.Create "addons-877061"
	I0731 20:10:05.267370 1101872 start.go:293] postStartSetup for "addons-877061" (driver="kvm2")
	I0731 20:10:05.267386 1101872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:10:05.267410 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.267698 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:10:05.267726 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.270092 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.270402 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.270427 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.270528 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.270721 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.270905 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.271072 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.357644 1101872 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:10:05.361368 1101872 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:10:05.361397 1101872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:10:05.361475 1101872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:10:05.361501 1101872 start.go:296] duration metric: took 94.121882ms for postStartSetup
	I0731 20:10:05.361544 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:10:05.362194 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:05.364534 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.364915 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.364937 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.365168 1101872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json ...
	I0731 20:10:05.365350 1101872 start.go:128] duration metric: took 23.635112572s to createHost
	I0731 20:10:05.365402 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.368056 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.368395 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.368435 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.368537 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.368754 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.368946 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.369058 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.369214 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:05.369425 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:05.369441 1101872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:10:05.480248 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722456605.458262123
	
	I0731 20:10:05.480273 1101872 fix.go:216] guest clock: 1722456605.458262123
	I0731 20:10:05.480281 1101872 fix.go:229] Guest: 2024-07-31 20:10:05.458262123 +0000 UTC Remote: 2024-07-31 20:10:05.365363546 +0000 UTC m=+23.736809928 (delta=92.898577ms)
	I0731 20:10:05.480336 1101872 fix.go:200] guest clock delta is within tolerance: 92.898577ms
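The delta reported above is simply guest clock minus host clock; plugging the two timestamps from the log into bc reproduces it (illustrative check only).
	echo '1722456605.458262123 - 1722456605.365363546' | bc   # .092898577 s, i.e. the 92.898577ms delta, within tolerance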
	I0731 20:10:05.480347 1101872 start.go:83] releasing machines lock for "addons-877061", held for 23.750182454s
	I0731 20:10:05.480373 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.480677 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:05.483179 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.483497 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.483532 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.483725 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484233 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484465 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484606 1101872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:10:05.484654 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.484703 1101872 ssh_runner.go:195] Run: cat /version.json
	I0731 20:10:05.484733 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.487061 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487415 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.487447 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487469 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487719 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.487928 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.487937 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.487973 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.488074 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.488171 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.488262 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.488333 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.488377 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.488541 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.588468 1101872 ssh_runner.go:195] Run: systemctl --version
	I0731 20:10:05.593755 1101872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:10:05.748616 1101872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:10:05.754239 1101872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:10:05.754316 1101872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:10:05.768678 1101872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:10:05.768704 1101872 start.go:495] detecting cgroup driver to use...
	I0731 20:10:05.768772 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:10:05.784180 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:10:05.797071 1101872 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:10:05.797121 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:10:05.809431 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:10:05.821709 1101872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:10:05.935050 1101872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:10:06.088122 1101872 docker.go:233] disabling docker service ...
	I0731 20:10:06.088194 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:10:06.102213 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:10:06.114209 1101872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:10:06.227528 1101872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:10:06.342502 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:10:06.355909 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:10:06.372427 1101872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:10:06.372504 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.382299 1101872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:10:06.382366 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.392003 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.401384 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.410875 1101872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:10:06.420523 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.430013 1101872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.445312 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
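Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; the rest of the file is not shown in the log, so this is only a sketch of the fields that were touched.
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]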
	I0731 20:10:06.454998 1101872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:10:06.463829 1101872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:10:06.463885 1101872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:10:06.476194 1101872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
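The sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write above, the state can be confirmed with standard commands (generic checks, not part of the logged flow).
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # typically 1 once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # expected: 1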
	I0731 20:10:06.484963 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:06.595215 1101872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:10:06.721069 1101872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:10:06.721169 1101872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:10:06.725362 1101872 start.go:563] Will wait 60s for crictl version
	I0731 20:10:06.725439 1101872 ssh_runner.go:195] Run: which crictl
	I0731 20:10:06.728681 1101872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:10:06.763238 1101872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:10:06.763367 1101872 ssh_runner.go:195] Run: crio --version
	I0731 20:10:06.788732 1101872 ssh_runner.go:195] Run: crio --version
	I0731 20:10:06.823043 1101872 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:10:06.824459 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:06.826944 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:06.827304 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:06.827335 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:06.827567 1101872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:10:06.831392 1101872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:10:06.843382 1101872 kubeadm.go:883] updating cluster {Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:10:06.843534 1101872 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:10:06.843595 1101872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:10:06.871904 1101872 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:10:06.871981 1101872 ssh_runner.go:195] Run: which lz4
	I0731 20:10:06.875535 1101872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:10:06.879154 1101872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:10:06.879188 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:10:08.065441 1101872 crio.go:462] duration metric: took 1.189930085s to copy over tarball
	I0731 20:10:08.065549 1101872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:10:10.234643 1101872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.169058277s)
	I0731 20:10:10.234676 1101872 crio.go:469] duration metric: took 2.169196058s to extract the tarball
	I0731 20:10:10.234687 1101872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:10:10.271319 1101872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:10:10.309858 1101872 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:10:10.309889 1101872 cache_images.go:84] Images are preloaded, skipping loading
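The preload tarball is how minikube avoids pulling every control-plane image over the network; the second crictl listing above is the confirmation. A quick manual spot-check for one of the expected images (generic crictl call, not from the log):
	sudo crictl images | grep registry.k8s.io/kube-apiserver   # expected tag: v1.30.3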
	I0731 20:10:10.309902 1101872 kubeadm.go:934] updating node { 192.168.39.25 8443 v1.30.3 crio true true} ...
	I0731 20:10:10.310041 1101872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-877061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:10:10.310132 1101872 ssh_runner.go:195] Run: crio config
	I0731 20:10:10.355459 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:10:10.355483 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:10:10.355506 1101872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:10:10.355542 1101872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.25 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-877061 NodeName:addons-877061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:10:10.355718 1101872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-877061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:10:10.355812 1101872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:10:10.364927 1101872 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:10:10.365004 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:10:10.373493 1101872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 20:10:10.388903 1101872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:10:10.403600 1101872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
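The kubeadm config generated above has just been shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new (later copied to /var/tmp/minikube/kubeadm.yaml); as a sanity check it can be exercised with kubeadm's dry-run mode using the same pinned binaries. The --dry-run flag is standard kubeadm, not something taken from this log.
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run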
	I0731 20:10:10.418483 1101872 ssh_runner.go:195] Run: grep 192.168.39.25	control-plane.minikube.internal$ /etc/hosts
	I0731 20:10:10.422021 1101872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:10:10.432755 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:10.545580 1101872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:10:10.561846 1101872 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061 for IP: 192.168.39.25
	I0731 20:10:10.561876 1101872 certs.go:194] generating shared ca certs ...
	I0731 20:10:10.561911 1101872 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.562105 1101872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:10:10.608298 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt ...
	I0731 20:10:10.608330 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt: {Name:mk2ab08007953158416a03ea13176bac62a60120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.608526 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key ...
	I0731 20:10:10.608541 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key: {Name:mk996214a0f78812401e96bd781853b13ddbdc3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.608652 1101872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:10:10.936721 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt ...
	I0731 20:10:10.936755 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt: {Name:mk355b96fd4550604698f58523265e11d1e33ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.936931 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key ...
	I0731 20:10:10.936942 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key: {Name:mkd3d94eb66d256de4785040cd6e2e932ccf8f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.937010 1101872 certs.go:256] generating profile certs ...
	I0731 20:10:10.937086 1101872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key
	I0731 20:10:10.937100 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt with IP's: []
	I0731 20:10:11.069668 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt ...
	I0731 20:10:11.069699 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: {Name:mk1d4f549e753268fa7d38fab982a5df48bacdc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.069879 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key ...
	I0731 20:10:11.069890 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key: {Name:mk10b93e096972e82f2279fa4f9ced407e6fd21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.069962 1101872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc
	I0731 20:10:11.069980 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.25]
	I0731 20:10:11.503254 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc ...
	I0731 20:10:11.503295 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc: {Name:mkcb2470601fe2c34add3a88327863ce2693a403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.503486 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc ...
	I0731 20:10:11.503501 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc: {Name:mk98a8de165b4bf52d24b342e7677707c99a7698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.503575 1101872 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt
	I0731 20:10:11.503650 1101872 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key
	I0731 20:10:11.503697 1101872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key
	I0731 20:10:11.503716 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt with IP's: []
	I0731 20:10:11.713642 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt ...
	I0731 20:10:11.713674 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt: {Name:mkeb5cf10009dd08cd5003aba20a9c24b8ff2be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.713851 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key ...
	I0731 20:10:11.713865 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key: {Name:mk4cadf9987a7b4c2587b5bc22f415734c532f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.714039 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:10:11.714075 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:10:11.714100 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:10:11.714125 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:10:11.714740 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:10:11.738701 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:10:11.761522 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:10:11.783895 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:10:11.805016 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 20:10:11.825906 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:10:11.847927 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:10:11.869959 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:10:11.891542 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:10:11.914991 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
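Once the profile certs are pushed into /var/lib/minikube/certs, the SANs requested at generation time (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.25) can be confirmed with a plain openssl call; this is a generic check, not part of the minikube flow.
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'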
	I0731 20:10:11.933231 1101872 ssh_runner.go:195] Run: openssl version
	I0731 20:10:11.938889 1101872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:10:11.948972 1101872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.961416 1101872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.961495 1101872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.969427 1101872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
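The b5213941.0 name above follows the OpenSSL subject-hash convention: the hash of the CA certificate plus a ".0" suffix, which is how OpenSSL locates trust anchors in /etc/ssl/certs. Reproducing it by hand:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem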
	I0731 20:10:11.980808 1101872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:10:11.984724 1101872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:10:11.984792 1101872 kubeadm.go:392] StartCluster: {Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:10:11.984900 1101872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:10:11.984982 1101872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:10:12.020864 1101872 cri.go:89] found id: ""
	I0731 20:10:12.020959 1101872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:10:12.030290 1101872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:10:12.039318 1101872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:10:12.048065 1101872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:10:12.048103 1101872 kubeadm.go:157] found existing configuration files:
	
	I0731 20:10:12.048159 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:10:12.057645 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:10:12.057709 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:10:12.066595 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:10:12.074729 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:10:12.074786 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:10:12.084237 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:10:12.092807 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:10:12.092877 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:10:12.101705 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:10:12.109998 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:10:12.110052 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
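The block above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise (here all four files are simply absent, so every grep exits with status 2). A hedged shell sketch of the same pattern, not minikube source:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done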
	I0731 20:10:12.118532 1101872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:10:12.166515 1101872 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 20:10:12.166652 1101872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:10:12.281406 1101872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:10:12.281542 1101872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:10:12.281691 1101872 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:10:12.470322 1101872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:10:12.600577 1101872 out.go:204]   - Generating certificates and keys ...
	I0731 20:10:12.600746 1101872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:10:12.600863 1101872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:10:12.720304 1101872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:10:12.867722 1101872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:10:12.917204 1101872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:10:13.172722 1101872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:10:13.501957 1101872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:10:13.502177 1101872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-877061 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I0731 20:10:13.662307 1101872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:10:13.662468 1101872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-877061 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I0731 20:10:13.939212 1101872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:10:14.057633 1101872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:10:14.120202 1101872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:10:14.120427 1101872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:10:14.293872 1101872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:10:14.364956 1101872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 20:10:14.552445 1101872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:10:14.706753 1101872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:10:15.017164 1101872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:10:15.017602 1101872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:10:15.019765 1101872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:10:15.021675 1101872 out.go:204]   - Booting up control plane ...
	I0731 20:10:15.021758 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:10:15.021823 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:10:15.021884 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:10:15.051411 1101872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:10:15.052470 1101872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:10:15.052526 1101872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:10:15.168516 1101872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 20:10:15.168642 1101872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 20:10:15.668357 1101872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.789824ms
	I0731 20:10:15.668491 1101872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 20:10:20.668211 1101872 kubeadm.go:310] [api-check] The API server is healthy after 5.002291697s
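The kubelet and API-server health checks above poll until the probes return OK. The same check can be repeated by hand against this node (assumed commands; the paths are the standard kube-apiserver probe endpoints):

    curl -sk https://192.168.39.25:8443/livez
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/readyz?verbose'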
	I0731 20:10:20.681456 1101872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 20:10:20.696925 1101872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 20:10:20.724773 1101872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 20:10:20.724950 1101872 kubeadm.go:310] [mark-control-plane] Marking the node addons-877061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 20:10:20.742563 1101872 kubeadm.go:310] [bootstrap-token] Using token: my6dzf.f6910kd3utos5wxr
	I0731 20:10:20.743904 1101872 out.go:204]   - Configuring RBAC rules ...
	I0731 20:10:20.744003 1101872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 20:10:20.751418 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 20:10:20.763084 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 20:10:20.767018 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 20:10:20.774050 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 20:10:20.779355 1101872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 20:10:21.074281 1101872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 20:10:21.519525 1101872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 20:10:22.074237 1101872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 20:10:22.075074 1101872 kubeadm.go:310] 
	I0731 20:10:22.075137 1101872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 20:10:22.075145 1101872 kubeadm.go:310] 
	I0731 20:10:22.075245 1101872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 20:10:22.075259 1101872 kubeadm.go:310] 
	I0731 20:10:22.075284 1101872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 20:10:22.075339 1101872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 20:10:22.075399 1101872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 20:10:22.075409 1101872 kubeadm.go:310] 
	I0731 20:10:22.075479 1101872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 20:10:22.075489 1101872 kubeadm.go:310] 
	I0731 20:10:22.075547 1101872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 20:10:22.075556 1101872 kubeadm.go:310] 
	I0731 20:10:22.075625 1101872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 20:10:22.075713 1101872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 20:10:22.075811 1101872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 20:10:22.075821 1101872 kubeadm.go:310] 
	I0731 20:10:22.075933 1101872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 20:10:22.076021 1101872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 20:10:22.076027 1101872 kubeadm.go:310] 
	I0731 20:10:22.076114 1101872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token my6dzf.f6910kd3utos5wxr \
	I0731 20:10:22.076255 1101872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 20:10:22.076276 1101872 kubeadm.go:310] 	--control-plane 
	I0731 20:10:22.076281 1101872 kubeadm.go:310] 
	I0731 20:10:22.076350 1101872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 20:10:22.076388 1101872 kubeadm.go:310] 
	I0731 20:10:22.076474 1101872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token my6dzf.f6910kd3utos5wxr \
	I0731 20:10:22.076585 1101872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 20:10:22.077153 1101872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
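kubeadm init ends by printing the join commands and a warning that the kubelet service is not enabled. Two hedged follow-ups suggested by that output (kubeadm sits in the same binaries directory that the init command's PATH points at):

    sudo systemctl enable kubelet.service                          # addresses the [WARNING Service-Kubelet] line
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm token list     # confirm the bootstrap token used in the join commands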
	I0731 20:10:22.077185 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:10:22.077196 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:10:22.079155 1101872 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:10:22.080701 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:10:22.090791 1101872 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
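The 496-byte file copied above is minikube's bridge CNI configuration. Its exact contents are not reproduced in this log; the following is only a minimal sketch of a bridge conflist in the same general shape, with assumed values:

    # Hypothetical bridge conflist (assumed contents, for illustration only)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF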
	I0731 20:10:22.107039 1101872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:10:22.107106 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:22.107152 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-877061 minikube.k8s.io/updated_at=2024_07_31T20_10_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=addons-877061 minikube.k8s.io/primary=true
	I0731 20:10:22.144838 1101872 ops.go:34] apiserver oom_adj: -16
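The two kubectl commands above grant cluster-admin to the kube-system default service account (the minikube-rbac binding) and stamp the node with minikube's metadata labels. A hedged way to verify both afterwards:

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-877061 --show-labels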
	I0731 20:10:22.213504 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:22.713992 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:23.214085 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:23.713470 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:24.213082 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:24.713117 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:25.214070 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:25.713581 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:26.213066 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:26.714102 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:27.213707 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:27.713502 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:28.213359 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:28.713712 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:29.213844 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:29.713167 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:30.213993 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:30.714004 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:31.213100 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:31.713975 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:32.213565 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:32.713192 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:33.213948 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:33.713252 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.213394 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.714131 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.800756 1101872 kubeadm.go:1113] duration metric: took 12.693713349s to wait for elevateKubeSystemPrivileges
	I0731 20:10:34.800800 1101872 kubeadm.go:394] duration metric: took 22.816013892s to StartCluster
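The repeated "get sa default" calls above are a poll loop: minikube retries roughly every 500 ms until the default ServiceAccount exists in the new cluster, which is what the 12.69 s elevateKubeSystemPrivileges metric measures. A hedged shell equivalent:

    # Wait until the 'default' ServiceAccount exists (sketch of the loop above)
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done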
	I0731 20:10:34.800828 1101872 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:34.800997 1101872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:10:34.801388 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:34.801593 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 20:10:34.801623 1101872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:10:34.801709 1101872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
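The toEnable map above lists which addons this profile turns on (ingress, metrics-server, registry, csi-hostpath-driver, and so on). Outside the test harness the same set can be toggled per profile with the addons subcommand (assumed invocations):

    minikube -p addons-877061 addons enable ingress
    minikube -p addons-877061 addons enable metrics-server
    minikube -p addons-877061 addons list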
	I0731 20:10:34.801833 1101872 addons.go:69] Setting helm-tiller=true in profile "addons-877061"
	I0731 20:10:34.801856 1101872 addons.go:69] Setting yakd=true in profile "addons-877061"
	I0731 20:10:34.801864 1101872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-877061"
	I0731 20:10:34.801891 1101872 addons.go:234] Setting addon helm-tiller=true in "addons-877061"
	I0731 20:10:34.801891 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:34.801898 1101872 addons.go:69] Setting ingress=true in profile "addons-877061"
	I0731 20:10:34.801901 1101872 addons.go:69] Setting default-storageclass=true in profile "addons-877061"
	I0731 20:10:34.801915 1101872 addons.go:234] Setting addon ingress=true in "addons-877061"
	I0731 20:10:34.801890 1101872 addons.go:234] Setting addon yakd=true in "addons-877061"
	I0731 20:10:34.801949 1101872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-877061"
	I0731 20:10:34.801955 1101872 addons.go:69] Setting registry=true in profile "addons-877061"
	I0731 20:10:34.801957 1101872 addons.go:69] Setting inspektor-gadget=true in profile "addons-877061"
	I0731 20:10:34.801962 1101872 addons.go:69] Setting storage-provisioner=true in profile "addons-877061"
	I0731 20:10:34.801975 1101872 addons.go:234] Setting addon registry=true in "addons-877061"
	I0731 20:10:34.802004 1101872 addons.go:234] Setting addon storage-provisioner=true in "addons-877061"
	I0731 20:10:34.802015 1101872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-877061"
	I0731 20:10:34.802025 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801955 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802047 1101872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-877061"
	I0731 20:10:34.802069 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801976 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802028 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801942 1101872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-877061"
	I0731 20:10:34.801942 1101872 addons.go:69] Setting gcp-auth=true in profile "addons-877061"
	I0731 20:10:34.802526 1101872 mustload.go:65] Loading cluster: addons-877061
	I0731 20:10:34.802528 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802550 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802582 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802583 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802596 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802605 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802615 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802616 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802624 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802665 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.801876 1101872 addons.go:69] Setting cloud-spanner=true in profile "addons-877061"
	I0731 20:10:34.801985 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802693 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802700 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:34.802712 1101872 addons.go:234] Setting addon cloud-spanner=true in "addons-877061"
	I0731 20:10:34.802560 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.801936 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801988 1101872 addons.go:69] Setting metrics-server=true in profile "addons-877061"
	I0731 20:10:34.802793 1101872 addons.go:234] Setting addon metrics-server=true in "addons-877061"
	I0731 20:10:34.803030 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.801998 1101872 addons.go:69] Setting volumesnapshots=true in profile "addons-877061"
	I0731 20:10:34.802000 1101872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-877061"
	I0731 20:10:34.801977 1101872 addons.go:234] Setting addon inspektor-gadget=true in "addons-877061"
	I0731 20:10:34.803087 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803145 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803151 1101872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-877061"
	I0731 20:10:34.801998 1101872 addons.go:69] Setting volcano=true in profile "addons-877061"
	I0731 20:10:34.803567 1101872 addons.go:234] Setting addon volcano=true in "addons-877061"
	I0731 20:10:34.803601 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803624 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803657 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803827 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803853 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804002 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803107 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804029 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.801985 1101872 addons.go:69] Setting ingress-dns=true in profile "addons-877061"
	I0731 20:10:34.804078 1101872 addons.go:234] Setting addon ingress-dns=true in "addons-877061"
	I0731 20:10:34.803117 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.804133 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.804451 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804498 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804502 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804539 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803119 1101872 addons.go:234] Setting addon volumesnapshots=true in "addons-877061"
	I0731 20:10:34.804705 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803156 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804775 1101872 out.go:177] * Verifying Kubernetes components...
	I0731 20:10:34.803092 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804936 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803183 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.808379 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:34.824315 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0731 20:10:34.824436 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0731 20:10:34.824444 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0731 20:10:34.824510 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0731 20:10:34.824992 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825138 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825152 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825539 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825775 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825794 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.825794 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825811 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.825927 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825937 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.826137 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826208 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0731 20:10:34.826448 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.826460 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.826536 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.826603 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826624 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826851 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.826965 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.827003 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.827015 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.827217 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.827264 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.827337 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.827408 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.827444 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.831524 1101872 addons.go:234] Setting addon default-storageclass=true in "addons-877061"
	I0731 20:10:34.831575 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.831961 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832008 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832509 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832539 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832644 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832664 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832935 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832971 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.834483 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.834528 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.846149 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0731 20:10:34.846152 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0731 20:10:34.846637 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.846669 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.847062 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.847082 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.847549 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.847571 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.848637 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.849487 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.849528 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.851946 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0731 20:10:34.852439 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.852608 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.853153 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.853176 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.853220 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.853254 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.854200 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0731 20:10:34.856424 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.857433 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.857452 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.858246 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.858561 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.861136 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.861767 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.861815 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.865927 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.868328 1101872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 20:10:34.868922 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I0731 20:10:34.869464 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.869809 1101872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 20:10:34.869830 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 20:10:34.869853 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.870209 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.870229 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.870313 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0731 20:10:34.870571 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.871408 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.871457 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.872753 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.873502 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.873520 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.873585 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.874064 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.874104 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.874191 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.874267 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0731 20:10:34.874275 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.874460 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.874664 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.874751 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.874837 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.874886 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.875337 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.875354 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.875436 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0731 20:10:34.875985 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.876167 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.876316 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.877298 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.877318 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.877569 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0731 20:10:34.878198 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.878292 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.878791 1101872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-877061"
	I0731 20:10:34.878842 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.879221 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.879261 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.879305 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.879321 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.879799 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.880029 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.880453 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.880494 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.880701 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:10:34.882933 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0731 20:10:34.882950 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0731 20:10:34.883518 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.883547 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.883775 1101872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:10:34.883795 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:10:34.883814 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.884581 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.884597 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.885297 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.885464 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.887056 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.887078 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.887642 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.887972 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.888043 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.889133 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.889304 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0731 20:10:34.889572 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:34.889596 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.889626 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.889790 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.889863 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.890114 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.890295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.890438 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.890450 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.890654 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.890964 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.891506 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.891544 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.891633 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:34.892741 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 20:10:34.892766 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.892808 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.893933 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.894131 1101872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 20:10:34.894147 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 20:10:34.894168 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.894302 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.894343 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.897603 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.898177 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.898209 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.898431 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.898764 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.898945 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.899089 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.899528 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0731 20:10:34.899964 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.900650 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.900670 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.901030 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.901571 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.901607 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.911387 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0731 20:10:34.911972 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.912638 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.912660 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.913158 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.913846 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.913888 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.915604 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0731 20:10:34.916362 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.916996 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.917015 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.917744 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0731 20:10:34.918250 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.918847 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.918864 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.919261 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.919322 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I0731 20:10:34.920120 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.920160 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.920382 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.920948 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.920970 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.921037 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.921591 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.921670 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.922902 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0731 20:10:34.923053 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.924892 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.925024 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0731 20:10:34.927120 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0731 20:10:34.927134 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.927259 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.927302 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.927315 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.927379 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0731 20:10:34.928294 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.928322 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.928402 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.928465 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.928794 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:34.928808 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:34.928881 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.930243 1101872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 20:10:34.930624 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0731 20:10:34.930788 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.930893 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.930970 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.931020 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:34.931044 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:34.931052 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:34.931061 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:34.931068 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:34.931200 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.931250 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:34.931257 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 20:10:34.931351 1101872 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
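The volcano addon fails to enable here because it does not support the crio runtime; on a crio-based profile it can simply be left disabled (assumed command):

    minikube -p addons-877061 addons disable volcano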
	I0731 20:10:34.931494 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.931567 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.931620 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 20:10:34.931635 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 20:10:34.931659 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.932442 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.932459 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.933222 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.933239 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.933350 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.933885 1101872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 20:10:34.934036 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.934068 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.934539 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.934759 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.934819 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.934854 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0731 20:10:34.936351 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 20:10:34.936690 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.936713 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.936756 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.936887 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.937425 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.937443 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.937511 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.937624 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.937693 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 20:10:34.937707 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 20:10:34.937726 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.937770 1101872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:10:34.937798 1101872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:10:34.937816 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.938416 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.938433 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.938501 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.938556 1101872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 20:10:34.938627 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.938930 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.938960 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.939243 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.940620 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0731 20:10:34.940736 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.941206 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.941254 1101872 out.go:177]   - Using image docker.io/busybox:stable
	I0731 20:10:34.941295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.941502 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.941915 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.941931 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.942073 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.942220 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.942241 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0731 20:10:34.942242 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.942467 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.942581 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.942658 1101872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 20:10:34.942673 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 20:10:34.942692 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.942776 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.942942 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.942992 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.943026 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.943161 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.943476 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.943509 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.943814 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.943833 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.944352 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.944371 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.944622 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.944918 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.945114 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.945533 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.946178 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.946186 1101872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 20:10:34.946395 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.946643 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.947552 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 20:10:34.947575 1101872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 20:10:34.947593 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.947658 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.947670 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 20:10:34.948064 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.948081 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.948245 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 20:10:34.948312 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.948413 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.948591 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.948686 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 20:10:34.948760 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.949208 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.949229 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.949724 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0731 20:10:34.949728 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.949912 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.950288 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.950404 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 20:10:34.950594 1101872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 20:10:34.950608 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 20:10:34.950624 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.950667 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.950679 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.950936 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.951019 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.952537 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.952633 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 20:10:34.953417 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.954127 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.954183 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.954603 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.954634 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.954776 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.954882 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.955094 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.955252 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 20:10:34.955259 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.955313 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.955336 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.955340 1101872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 20:10:34.955500 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.955519 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.955988 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.956166 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.956270 1101872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 20:10:34.956305 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.956911 1101872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 20:10:34.956929 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 20:10:34.956948 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.957885 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 20:10:34.957896 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:10:34.957966 1101872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:10:34.957986 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.960103 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 20:10:34.960325 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.960748 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.960770 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.960942 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.961135 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.961327 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.961473 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.961482 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.961879 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.961903 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.962127 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.962295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.962413 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 20:10:34.962481 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.962590 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.964624 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 20:10:34.965834 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 20:10:34.965855 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 20:10:34.965876 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.968669 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.968989 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.969009 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.969185 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.969372 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.969446 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43981
	I0731 20:10:34.969688 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.969816 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.970070 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.970593 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.970607 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.970951 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.971135 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.972807 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.974257 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 20:10:34.975585 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 20:10:34.975600 1101872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 20:10:34.975614 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.978691 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.979058 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.979073 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.979223 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.979377 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.979499 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.979598 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.982910 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0731 20:10:35.000636 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:35.001235 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:35.001258 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:35.001695 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:35.001920 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:35.003800 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:35.006031 1101872 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 20:10:35.007407 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 20:10:35.007427 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 20:10:35.007445 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:35.010111 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:35.010623 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:35.010654 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:35.010849 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:35.011043 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:35.011239 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:35.011407 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:35.206025 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:10:35.275744 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 20:10:35.291417 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 20:10:35.307207 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:10:35.323006 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 20:10:35.362034 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 20:10:35.362068 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 20:10:35.375768 1101872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:10:35.375922 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 20:10:35.378301 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 20:10:35.378324 1101872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 20:10:35.390060 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 20:10:35.417560 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 20:10:35.436447 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 20:10:35.436474 1101872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 20:10:35.470342 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 20:10:35.470372 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 20:10:35.494808 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 20:10:35.494834 1101872 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 20:10:35.506245 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:10:35.506266 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 20:10:35.513379 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 20:10:35.513408 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 20:10:35.533202 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 20:10:35.533228 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 20:10:35.546389 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 20:10:35.546412 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 20:10:35.584171 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 20:10:35.584204 1101872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 20:10:35.630596 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 20:10:35.630627 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 20:10:35.654055 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 20:10:35.654101 1101872 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 20:10:35.673251 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:10:35.673283 1101872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:10:35.700468 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 20:10:35.700497 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 20:10:35.730896 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 20:10:35.730923 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 20:10:35.737789 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 20:10:35.772021 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 20:10:35.772058 1101872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 20:10:35.797288 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 20:10:35.797321 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 20:10:35.819564 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 20:10:35.820861 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 20:10:35.820881 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 20:10:35.863043 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 20:10:35.863071 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 20:10:35.864240 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:10:35.864280 1101872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:10:35.939626 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 20:10:35.939649 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 20:10:35.948393 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 20:10:35.948420 1101872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 20:10:35.972239 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 20:10:35.972269 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 20:10:36.046755 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 20:10:36.046791 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 20:10:36.056899 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:10:36.132032 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 20:10:36.151365 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 20:10:36.151392 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 20:10:36.172186 1101872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:36.172213 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 20:10:36.314542 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 20:10:36.314573 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 20:10:36.398708 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 20:10:36.398740 1101872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 20:10:36.437186 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:36.540606 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 20:10:36.540646 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 20:10:36.647151 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 20:10:36.647179 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 20:10:36.678061 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 20:10:36.678088 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 20:10:36.889673 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 20:10:36.889701 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 20:10:36.932109 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 20:10:37.074386 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 20:10:37.074426 1101872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 20:10:37.247024 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 20:10:39.233551 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.027483901s)
	I0731 20:10:39.233631 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.233644 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.233652 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.957871043s)
	I0731 20:10:39.233708 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.233725 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234104 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234107 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234149 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234108 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234163 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234174 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.234182 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234193 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.234202 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234206 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234534 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234608 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234574 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234582 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.236047 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234585 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.320600 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.029107802s)
	I0731 20:10:39.320628 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.013372899s)
	I0731 20:10:39.320665 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320677 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320679 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320692 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320713 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.997673039s)
	I0731 20:10:39.320762 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320778 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320798 1101872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.944826611s)
	I0731 20:10:39.320819 1101872 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
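(Editor's note on the ConfigMap edit completed above: the sed expression rewrites the stock CoreDNS Corefile by inserting a "log" line before "errors" and a "hosts" block before the "forward . /etc/resolv.conf" directive, so host.minikube.internal resolves to the host IP 192.168.39.1 from inside the cluster. As a rough sketch only, with the surrounding plugins taken from a typical kubeadm default Corefile and therefore possibly differing by Kubernetes version, the patched server block ends up looking like:

    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           ...
        }
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

The "fallthrough" keeps queries that the hosts plugin does not answer flowing on to the forward plugin.)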
	I0731 20:10:39.320877 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.930786705s)
	I0731 20:10:39.320909 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320919 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321312 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321335 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321366 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321372 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321379 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321386 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321448 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321454 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321462 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321468 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321808 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321851 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321858 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321866 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321873 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321935 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321945 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321953 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321961 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.322100 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.322136 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.322152 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.322363 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.322404 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.322420 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.322997 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.323028 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.323034 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.323347 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.323364 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.324545 1101872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.945006583s)
	I0731 20:10:39.325504 1101872 node_ready.go:35] waiting up to 6m0s for node "addons-877061" to be "Ready" ...
	I0731 20:10:39.384515 1101872 node_ready.go:49] node "addons-877061" has status "Ready":"True"
	I0731 20:10:39.384542 1101872 node_ready.go:38] duration metric: took 59.010062ms for node "addons-877061" to be "Ready" ...
	I0731 20:10:39.384554 1101872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:10:39.441153 1101872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:39.472252 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.472281 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.472628 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.472652 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.472674 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	W0731 20:10:39.472780 1101872 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
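(Editor's note on the storage-provisioner-rancher warning above: marking a StorageClass as the cluster default amounts to setting the storageclass.kubernetes.io/is-default-class annotation, and the "object has been modified" text is the API server's optimistic-concurrency conflict, i.e. the addon updated the object from a stale resourceVersion. A manual equivalent, shown only as a sketch and not what minikube itself runs, would be to re-issue the change as a patch, which carries no resourceVersion and so does not hit that conflict:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
)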
	I0731 20:10:39.486602 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.486633 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.486962 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.486990 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.487000 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.866253 1101872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-877061" context rescaled to 1 replicas
	I0731 20:10:41.455134 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:41.957494 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 20:10:41.957547 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:41.960881 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:41.961363 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:41.961397 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:41.961662 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:41.961991 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:41.962223 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:41.962422 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:42.226345 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 20:10:42.288444 1101872 addons.go:234] Setting addon gcp-auth=true in "addons-877061"
	I0731 20:10:42.288512 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:42.288883 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:42.288922 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:42.306365 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0731 20:10:42.306945 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:42.307581 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:42.307611 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:42.307981 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:42.308541 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:42.308574 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:42.323954 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45439
	I0731 20:10:42.324437 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:42.325001 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:42.325026 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:42.325410 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:42.325648 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:42.327415 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:42.327663 1101872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 20:10:42.327695 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:42.330313 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:42.330748 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:42.330788 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:42.330918 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:42.331117 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:42.331301 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:42.331471 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:42.733470 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.315869653s)
	I0731 20:10:42.733525 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733536 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733601 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.995763645s)
	I0731 20:10:42.733656 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.914059162s)
	I0731 20:10:42.733656 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733723 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733747 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.676816179s)
	I0731 20:10:42.733680 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733775 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733784 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733806 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733853 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.601780478s)
	I0731 20:10:42.733887 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733898 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734119 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.734146 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.734157 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.734166 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734426 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.297201841s)
	W0731 20:10:42.734463 1101872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 20:10:42.734502 1101872 retry.go:31] will retry after 177.691158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
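(Editor's note on the failure above: this is an ordering race. csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD introducing that kind, volumesnapshotclasses.snapshot.storage.k8s.io, is created in the very same apply and is not yet established when the class is validated, so kubectl exits 1 and minikube simply retries after a short delay (retry.go line above). When applying these manifests by hand, one way to avoid the race, sketched here with the same file paths the log uses, is to apply the CRD first, wait for it to be established, and only then create the class:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)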
	I0731 20:10:42.734587 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.802444493s)
	I0731 20:10:42.734613 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.734622 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734754 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.734768 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.734779 1101872 addons.go:475] Verifying addon ingress=true in "addons-877061"
	I0731 20:10:42.735032 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735233 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735257 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735297 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735318 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735328 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.735628 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735654 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735660 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735667 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735675 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.735719 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735738 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735744 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735753 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735759 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.736571 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.736622 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.736642 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.736655 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.736664 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.736726 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.736750 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.736757 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.736766 1101872 addons.go:475] Verifying addon metrics-server=true in "addons-877061"
	I0731 20:10:42.737079 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.737134 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737145 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737563 1101872 out.go:177] * Verifying ingress addon...
	I0731 20:10:42.737751 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737772 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737774 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.737786 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.737800 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.737824 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737838 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737846 1101872 addons.go:475] Verifying addon registry=true in "addons-877061"
	I0731 20:10:42.738058 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.738128 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.738164 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.738177 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.738183 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.738191 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.739026 1101872 out.go:177] * Verifying registry addon...
	I0731 20:10:42.740182 1101872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 20:10:42.740291 1101872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-877061 service yakd-dashboard -n yakd-dashboard
	
	I0731 20:10:42.741836 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 20:10:42.748060 1101872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 20:10:42.748077 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:42.755534 1101872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 20:10:42.755554 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:42.912435 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:43.245506 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:43.246572 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:43.744451 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:43.750483 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:43.955296 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:44.253768 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.006670024s)
	I0731 20:10:44.253845 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.253868 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.253849 1101872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.92615674s)
	I0731 20:10:44.254168 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:44.254180 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.254194 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.254212 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.254224 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.254464 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.254478 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.254489 1101872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-877061"
	I0731 20:10:44.255658 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:44.255664 1101872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 20:10:44.257538 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 20:10:44.258294 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 20:10:44.258728 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 20:10:44.258744 1101872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 20:10:44.266417 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:44.280755 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:44.291380 1101872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 20:10:44.291404 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:44.389125 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 20:10:44.389152 1101872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 20:10:44.447382 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 20:10:44.447407 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 20:10:44.508995 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 20:10:44.672880 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.760377894s)
	I0731 20:10:44.672961 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.672977 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.673379 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.673400 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.673411 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.673419 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.673661 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.673680 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.748085 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:44.750160 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:44.769604 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:45.244752 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:45.253543 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:45.264369 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:45.648523 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.139477809s)
	I0731 20:10:45.648598 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:45.648616 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:45.648950 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:45.648987 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:45.649003 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:45.649011 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:45.649250 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:45.649292 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:45.649312 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:45.650778 1101872 addons.go:475] Verifying addon gcp-auth=true in "addons-877061"
	I0731 20:10:45.652756 1101872 out.go:177] * Verifying gcp-auth addon...
	I0731 20:10:45.654735 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 20:10:45.667961 1101872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 20:10:45.667986 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:45.744884 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:45.750142 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:45.764475 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:46.158722 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:46.243598 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:46.246705 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:46.263545 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:46.451724 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:46.659319 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:46.746729 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:46.746852 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:46.767264 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:47.158978 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:47.245050 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:47.246785 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:47.461532 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:47.658468 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:47.744931 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:47.746257 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:47.764478 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.158171 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:48.247089 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:48.249236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:48.264916 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.658976 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:48.746388 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:48.746713 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:48.764855 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.946965 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:49.158373 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:49.245160 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:49.246524 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:49.265453 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:49.700349 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:49.746807 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:49.747742 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:49.768811 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.157859 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:50.245658 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:50.248061 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:50.263860 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.658387 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:50.750863 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:50.751578 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:50.773202 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.948543 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:51.159634 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:51.244475 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:51.246721 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:51.263971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:51.445434 1101872 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-fw2p8" not found
	I0731 20:10:51.445471 1101872 pod_ready.go:81] duration metric: took 12.004284155s for pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace to be "Ready" ...
	E0731 20:10:51.445487 1101872 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-fw2p8" not found
	I0731 20:10:51.445509 1101872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.450598 1101872 pod_ready.go:92] pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.450624 1101872 pod_ready.go:81] duration metric: took 5.101582ms for pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.450634 1101872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.455976 1101872 pod_ready.go:92] pod "etcd-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.455998 1101872 pod_ready.go:81] duration metric: took 5.356211ms for pod "etcd-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.456007 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.461238 1101872 pod_ready.go:92] pod "kube-apiserver-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.461258 1101872 pod_ready.go:81] duration metric: took 5.244109ms for pod "kube-apiserver-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.461269 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.466666 1101872 pod_ready.go:92] pod "kube-controller-manager-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.466684 1101872 pod_ready.go:81] duration metric: took 5.409103ms for pod "kube-controller-manager-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.466695 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h92bj" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.651562 1101872 pod_ready.go:92] pod "kube-proxy-h92bj" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.651593 1101872 pod_ready.go:81] duration metric: took 184.890923ms for pod "kube-proxy-h92bj" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.651608 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.658127 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:51.745127 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:51.746805 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:51.764894 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.044940 1101872 pod_ready.go:92] pod "kube-scheduler-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:52.044970 1101872 pod_ready.go:81] duration metric: took 393.352713ms for pod "kube-scheduler-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.044984 1101872 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.157999 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:52.245237 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:52.246347 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:52.263144 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.444558 1101872 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:52.444584 1101872 pod_ready.go:81] duration metric: took 399.592841ms for pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.444605 1101872 pod_ready.go:38] duration metric: took 13.060030069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:10:52.444623 1101872 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:10:52.444680 1101872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:10:52.461760 1101872 api_server.go:72] duration metric: took 17.660094129s to wait for apiserver process to appear ...
	I0731 20:10:52.461795 1101872 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:10:52.461834 1101872 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0731 20:10:52.466781 1101872 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I0731 20:10:52.467778 1101872 api_server.go:141] control plane version: v1.30.3
	I0731 20:10:52.467807 1101872 api_server.go:131] duration metric: took 6.005109ms to wait for apiserver health ...
	I0731 20:10:52.467817 1101872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:10:52.652496 1101872 system_pods.go:59] 18 kube-system pods found
	I0731 20:10:52.652532 1101872 system_pods.go:61] "coredns-7db6d8ff4d-pjvjp" [e01b9e3f-5d75-4f28-bef3-a1160ea25c49] Running
	I0731 20:10:52.652544 1101872 system_pods.go:61] "csi-hostpath-attacher-0" [faf92bf1-8436-4e5f-812b-2d8ee7be78f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 20:10:52.652555 1101872 system_pods.go:61] "csi-hostpath-resizer-0" [66792cd1-a930-47fe-aba7-0e628cbf832c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 20:10:52.652566 1101872 system_pods.go:61] "csi-hostpathplugin-w6w49" [85ac230e-8509-454a-a821-35db1c0791a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 20:10:52.652577 1101872 system_pods.go:61] "etcd-addons-877061" [c4d67bbf-58e0-4d6a-a64d-80504f04c202] Running
	I0731 20:10:52.652582 1101872 system_pods.go:61] "kube-apiserver-addons-877061" [88683965-c027-49db-b09d-ebcd761edde0] Running
	I0731 20:10:52.652585 1101872 system_pods.go:61] "kube-controller-manager-addons-877061" [2e12f940-8ee6-46c7-b124-b87c822a8116] Running
	I0731 20:10:52.652591 1101872 system_pods.go:61] "kube-ingress-dns-minikube" [df8e5ae0-bd13-4bca-a087-107e89be68cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 20:10:52.652598 1101872 system_pods.go:61] "kube-proxy-h92bj" [8dac7096-4089-4931-8b7d-506f46fa30aa] Running
	I0731 20:10:52.652602 1101872 system_pods.go:61] "kube-scheduler-addons-877061" [153883b8-84c7-48cc-a5ef-f0bc34d4fdb4] Running
	I0731 20:10:52.652610 1101872 system_pods.go:61] "metrics-server-c59844bb4-szt4w" [815a74e0-c39f-4673-8b08-290908785d21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:10:52.652613 1101872 system_pods.go:61] "nvidia-device-plugin-daemonset-5kbf8" [c837ef00-57b2-4111-8588-1b47358c0549] Running
	I0731 20:10:52.652621 1101872 system_pods.go:61] "registry-698f998955-pgf2q" [40e9667a-bd97-42a3-bb45-e40bc6e3b530] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 20:10:52.652627 1101872 system_pods.go:61] "registry-proxy-cdmns" [ec3040a1-3e1e-4ba3-9242-35e9ce417ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 20:10:52.652638 1101872 system_pods.go:61] "snapshot-controller-745499f584-2jq5t" [e85349e6-8af3-456c-b244-8e0916f824d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:52.652651 1101872 system_pods.go:61] "snapshot-controller-745499f584-kc6dc" [518b68fb-1e49-48af-862e-7907a32ba284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:52.652661 1101872 system_pods.go:61] "storage-provisioner" [0edee967-79b7-490d-baf7-7412a25fc2c7] Running
	I0731 20:10:52.652672 1101872 system_pods.go:61] "tiller-deploy-6677d64bcd-7dwjf" [b2e84403-dfb7-4445-83e9-f9864386e974] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 20:10:52.652684 1101872 system_pods.go:74] duration metric: took 184.859472ms to wait for pod list to return data ...
	I0731 20:10:52.652695 1101872 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:10:52.658249 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:52.747102 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:52.749236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:52.771215 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.844988 1101872 default_sa.go:45] found service account: "default"
	I0731 20:10:52.845016 1101872 default_sa.go:55] duration metric: took 192.311468ms for default service account to be created ...
	I0731 20:10:52.845025 1101872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:10:53.050549 1101872 system_pods.go:86] 18 kube-system pods found
	I0731 20:10:53.050582 1101872 system_pods.go:89] "coredns-7db6d8ff4d-pjvjp" [e01b9e3f-5d75-4f28-bef3-a1160ea25c49] Running
	I0731 20:10:53.050591 1101872 system_pods.go:89] "csi-hostpath-attacher-0" [faf92bf1-8436-4e5f-812b-2d8ee7be78f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 20:10:53.050598 1101872 system_pods.go:89] "csi-hostpath-resizer-0" [66792cd1-a930-47fe-aba7-0e628cbf832c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 20:10:53.050607 1101872 system_pods.go:89] "csi-hostpathplugin-w6w49" [85ac230e-8509-454a-a821-35db1c0791a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 20:10:53.050613 1101872 system_pods.go:89] "etcd-addons-877061" [c4d67bbf-58e0-4d6a-a64d-80504f04c202] Running
	I0731 20:10:53.050617 1101872 system_pods.go:89] "kube-apiserver-addons-877061" [88683965-c027-49db-b09d-ebcd761edde0] Running
	I0731 20:10:53.050621 1101872 system_pods.go:89] "kube-controller-manager-addons-877061" [2e12f940-8ee6-46c7-b124-b87c822a8116] Running
	I0731 20:10:53.050628 1101872 system_pods.go:89] "kube-ingress-dns-minikube" [df8e5ae0-bd13-4bca-a087-107e89be68cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 20:10:53.050633 1101872 system_pods.go:89] "kube-proxy-h92bj" [8dac7096-4089-4931-8b7d-506f46fa30aa] Running
	I0731 20:10:53.050638 1101872 system_pods.go:89] "kube-scheduler-addons-877061" [153883b8-84c7-48cc-a5ef-f0bc34d4fdb4] Running
	I0731 20:10:53.050644 1101872 system_pods.go:89] "metrics-server-c59844bb4-szt4w" [815a74e0-c39f-4673-8b08-290908785d21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:10:53.050649 1101872 system_pods.go:89] "nvidia-device-plugin-daemonset-5kbf8" [c837ef00-57b2-4111-8588-1b47358c0549] Running
	I0731 20:10:53.050655 1101872 system_pods.go:89] "registry-698f998955-pgf2q" [40e9667a-bd97-42a3-bb45-e40bc6e3b530] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 20:10:53.050660 1101872 system_pods.go:89] "registry-proxy-cdmns" [ec3040a1-3e1e-4ba3-9242-35e9ce417ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 20:10:53.050667 1101872 system_pods.go:89] "snapshot-controller-745499f584-2jq5t" [e85349e6-8af3-456c-b244-8e0916f824d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:53.050677 1101872 system_pods.go:89] "snapshot-controller-745499f584-kc6dc" [518b68fb-1e49-48af-862e-7907a32ba284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:53.050681 1101872 system_pods.go:89] "storage-provisioner" [0edee967-79b7-490d-baf7-7412a25fc2c7] Running
	I0731 20:10:53.050687 1101872 system_pods.go:89] "tiller-deploy-6677d64bcd-7dwjf" [b2e84403-dfb7-4445-83e9-f9864386e974] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 20:10:53.050695 1101872 system_pods.go:126] duration metric: took 205.66429ms to wait for k8s-apps to be running ...
	I0731 20:10:53.050706 1101872 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:10:53.050752 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:10:53.066741 1101872 system_svc.go:56] duration metric: took 16.022805ms WaitForService to wait for kubelet
	I0731 20:10:53.066780 1101872 kubeadm.go:582] duration metric: took 18.265119683s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:10:53.066804 1101872 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:10:53.157895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:53.246662 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:53.247302 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:53.247725 1101872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:10:53.247748 1101872 node_conditions.go:123] node cpu capacity is 2
	I0731 20:10:53.247763 1101872 node_conditions.go:105] duration metric: took 180.953959ms to run NodePressure ...
	I0731 20:10:53.247779 1101872 start.go:241] waiting for startup goroutines ...
	I0731 20:10:53.262910 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:53.658703 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:53.745084 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:53.747575 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:53.765186 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:54.159350 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:54.709079 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:54.709243 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:54.709308 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:54.709888 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:54.745860 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:54.748268 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:54.765368 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:55.158628 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:55.245176 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:55.246557 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:55.263713 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:55.658362 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:55.747102 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:55.748035 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:55.764364 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:56.158713 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:56.245690 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:56.247228 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:56.266792 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:56.658516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:56.744609 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:56.746814 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:56.765277 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:57.158666 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:57.245210 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:57.246826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:57.264558 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:57.658844 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:57.744218 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:57.746386 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:57.768341 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:58.159466 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:58.252030 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:58.252459 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:58.264844 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:58.658484 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:58.744971 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:58.746548 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:58.765057 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:59.158940 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:59.244736 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:59.246832 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:59.262782 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:59.658613 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:59.744940 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:59.748919 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:59.769095 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:00.158977 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:00.247229 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:00.248676 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:00.265327 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:00.658971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:00.745493 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:00.747175 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:00.765222 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:01.159101 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:01.245461 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:01.246218 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:01.263358 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:01.657978 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:01.745009 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:01.746839 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:01.764380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:02.158648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:02.246041 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:02.251194 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:02.263938 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:02.658648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:02.797576 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:02.798990 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:02.800106 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:03.158754 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:03.244260 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:03.246827 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:03.262711 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:03.658912 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:03.744768 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:03.747516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:03.770008 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:04.160556 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:04.244399 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:04.246458 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:04.264415 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:04.658134 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:04.746388 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:04.746669 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:04.772380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:05.158345 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:05.245649 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:05.246437 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:05.264035 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:05.658300 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:05.744380 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:05.746793 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:05.768018 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:06.158743 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:06.244715 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:06.246976 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:06.263085 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:06.658895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:06.745592 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:06.747282 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:06.770162 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:07.158351 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:07.246583 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:07.247733 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:07.263202 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:07.658540 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:07.748471 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:07.749325 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:07.767870 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:08.445054 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:08.445167 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:08.445899 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:08.446920 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:08.658681 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:08.744764 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:08.746657 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:08.766543 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:09.158154 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:09.245958 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:09.250100 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:09.264457 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:09.658631 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:09.744909 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:09.747041 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:09.762623 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:10.158126 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:10.245410 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:10.246436 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:10.264870 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:10.658924 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:10.745635 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:10.746675 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:10.766778 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:11.159972 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:11.246622 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:11.247384 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:11.263808 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:11.658257 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:11.744290 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:11.746468 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:11.765242 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:12.158888 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:12.245373 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:12.247537 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:12.263756 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:12.658112 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:12.745734 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:12.750770 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:12.763626 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:13.158684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:13.244122 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:13.246949 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:13.263516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:13.658954 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:13.746088 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:13.753847 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:13.769643 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:14.158247 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:14.245402 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:14.246535 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:14.263502 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:14.919831 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:14.921368 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:14.922147 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:14.923196 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:15.158672 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:15.244616 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:15.247048 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:15.263162 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:15.659709 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:15.745781 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:15.748281 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:15.764581 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:16.159088 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:16.245283 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:16.246854 1101872 kapi.go:107] duration metric: took 33.505013889s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 20:11:16.262989 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:16.658771 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:16.744930 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:16.768242 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:17.158096 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:17.245004 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:17.262688 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:17.658517 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:17.745313 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:17.765059 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:18.160224 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:18.244784 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:18.265278 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:18.658255 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:18.744855 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:18.765406 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:19.158098 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:19.245965 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:19.263647 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:19.658252 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:19.744533 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:19.763094 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:20.158995 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:20.244904 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:20.264069 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:20.659222 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:20.745307 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:20.767942 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:21.158604 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:21.244421 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:21.263655 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:21.658538 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:21.744690 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:21.764000 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:22.158806 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:22.258155 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:22.269720 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:22.909957 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:22.915855 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:22.917536 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:23.158897 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:23.249153 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:23.263983 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:23.658757 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:23.744210 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:23.764220 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:24.160512 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:24.244097 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:24.262846 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:24.658921 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:24.745745 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:24.766442 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:25.159255 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:25.245610 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:25.263876 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:25.658684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:25.744405 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:25.763470 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:26.158442 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:26.246425 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:26.264189 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:26.658759 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:26.744763 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:26.765084 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:27.158899 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:27.244431 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:27.274822 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:27.657937 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:27.745152 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:27.767096 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:28.159027 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:28.245005 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:28.263952 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:28.658881 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:28.744587 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:28.764379 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:29.157895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:29.248182 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:29.262801 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:29.658708 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:29.744920 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:29.764698 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:30.159740 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:30.250916 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:30.269015 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:30.660585 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:30.746335 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:30.765947 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:31.158534 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:31.244162 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:31.262890 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:31.659437 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:31.744366 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:31.770665 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:32.158591 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:32.244284 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:32.263946 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:32.658547 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:32.743827 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:32.766552 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:33.158037 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:33.244831 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:33.263408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:33.658607 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:33.744825 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:33.763684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:34.158387 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:34.245009 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:34.263408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:34.658450 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:34.745659 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:34.765843 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:35.204189 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:35.244703 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:35.263671 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:35.659068 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:35.744725 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:35.766094 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:36.158920 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:36.244751 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:36.263996 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:36.659513 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:36.744887 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:36.766165 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:37.192407 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:37.245421 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:37.263280 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:37.659293 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:37.745013 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:37.763793 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:38.159093 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:38.248245 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:38.263036 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:38.665053 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:38.745818 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:38.769380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:39.159861 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:39.244926 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:39.264798 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:39.658225 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:39.745314 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:39.765626 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:40.159933 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:40.244731 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:40.263826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:40.659640 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:40.744989 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:40.765420 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:41.159517 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:41.244065 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:41.262539 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:41.658448 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:41.744811 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:41.768138 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:42.159651 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:42.244291 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:42.263073 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:42.659134 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:42.744879 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:42.764581 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:43.159954 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:43.244512 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:43.263079 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:43.659648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:43.744890 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:43.769236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:44.158249 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:44.245098 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:44.264154 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:44.659971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:44.744935 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:44.765397 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:45.160672 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:45.247699 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:45.266414 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:45.658826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:45.745322 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:45.768121 1101872 kapi.go:107] duration metric: took 1m1.509824853s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 20:11:46.159408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:46.245638 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:46.658655 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:46.746626 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:47.158377 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:47.245709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:47.658682 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:47.744368 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:48.157871 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:48.244311 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:48.658290 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:48.745418 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:49.158111 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:49.245290 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:49.659849 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:49.744562 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:50.158902 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:50.244709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:50.697406 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:50.744760 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:51.159232 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:51.244773 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:51.659495 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:51.744791 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:52.159073 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:52.245239 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:52.967179 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:52.967966 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:53.159247 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:53.246709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:53.658426 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:53.745633 1101872 kapi.go:107] duration metric: took 1m11.005447524s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 20:11:54.159727 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:54.658129 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:55.159067 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:55.659619 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:56.159541 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:56.693859 1101872 kapi.go:107] duration metric: took 1m11.039120407s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 20:11:56.695355 1101872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-877061 cluster.
	I0731 20:11:56.696942 1101872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 20:11:56.698601 1101872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 20:11:56.700451 1101872 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 20:11:56.701672 1101872 addons.go:510] duration metric: took 1m21.899960186s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin default-storageclass metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 20:11:56.701726 1101872 start.go:246] waiting for cluster config update ...
	I0731 20:11:56.701756 1101872 start.go:255] writing updated cluster config ...
	I0731 20:11:56.702055 1101872 ssh_runner.go:195] Run: rm -f paused
	I0731 20:11:56.754324 1101872 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 20:11:56.756129 1101872 out.go:177] * Done! kubectl is now configured to use "addons-877061" cluster and "default" namespace by default
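A short, hedged illustration of the opt-out mentioned in the gcp-auth output above: the addon reports that pods carrying the `gcp-auth-skip-secret` label are not given mounted credentials. The sketch below uses client-go to merge-patch that label onto a pod; the pod name `my-pod`, the `default` namespace, the kubeconfig path, and the label value `"true"` are illustrative assumptions, not values taken from this run.

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the kubeconfig that minikube writes for the current context.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("building kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("creating clientset: %v", err)
	}

	// Merge-patch the gcp-auth-skip-secret label onto the pod so the
	// gcp-auth webhook skips it the next time the pod is (re)created.
	patch := []byte(`{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}`)
	pod, err := clientset.CoreV1().Pods("default").Patch(
		context.Background(), "my-pod", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatalf("patching pod: %v", err)
	}
	fmt.Printf("labels on %s: %v\n", pod.Name, pod.Labels)
}

Equivalently, `kubectl label pod my-pod gcp-auth-skip-secret=true` does the same from the shell; either way, as the log notes, already-running pods must be recreated (or the addon re-enabled with --refresh) before the mounting behavior changes.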
	
	
	==> CRI-O <==
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.560560184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=658f7895-ba47-41a2-8d65-27b41bcfb063 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.561051677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b487260b826cc5fc5a514d220b98c451b52113ca247a4756423a3beaf171809,PodSandboxId:53204b239762e51646c26fd16c9de4252e5c5cd96dc8e4ebe389769b195bc869,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692468303688,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-nrn9s,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 8721b649-1364-402b-a76e-027c2ad79a82,},Annotations:map[string]string{io.kubernetes.container.hash: b28e93b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7272caf407db1d7c296b2ec8e8a82c20ba6ec86e86c131747b2b24e756df5a2b,PodSandboxId:1adadc8477618021e39a781db80cf627ab9287a7471baf813a120f4163746dc0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692267925680,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q5924,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73db615e-01a6-4977-9e7b-977dc14e48d2,},Annotations:map[string]string{io.kubernetes.container.hash: 60c400cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAI
NER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01
e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2
d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e1
5b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=658f7895-ba47-41a2-8d65-27b41bcfb063 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.594309201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2e3f1ac-165a-4a5c-950e-4f653a725c92 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.594390997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2e3f1ac-165a-4a5c-950e-4f653a725c92 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.595424248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2aee6c5-5566-47db-827d-4d598bdc0bc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.596893416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456941596866458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2aee6c5-5566-47db-827d-4d598bdc0bc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.597477133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc009d60-737e-46b9-a7e9-2282247dcaad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.597532514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc009d60-737e-46b9-a7e9-2282247dcaad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.598037966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b487260b826cc5fc5a514d220b98c451b52113ca247a4756423a3beaf171809,PodSandboxId:53204b239762e51646c26fd16c9de4252e5c5cd96dc8e4ebe389769b195bc869,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692468303688,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-nrn9s,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 8721b649-1364-402b-a76e-027c2ad79a82,},Annotations:map[string]string{io.kubernetes.container.hash: b28e93b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7272caf407db1d7c296b2ec8e8a82c20ba6ec86e86c131747b2b24e756df5a2b,PodSandboxId:1adadc8477618021e39a781db80cf627ab9287a7471baf813a120f4163746dc0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692267925680,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q5924,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73db615e-01a6-4977-9e7b-977dc14e48d2,},Annotations:map[string]string{io.kubernetes.container.hash: 60c400cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAI
NER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01
e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2
d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e1
5b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc009d60-737e-46b9-a7e9-2282247dcaad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.598479633Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=3f4e377e-5012-4de2-8160-156f0baffc1e name=/runtime.v1.RuntimeService/Status
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.598527125Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3f4e377e-5012-4de2-8160-156f0baffc1e name=/runtime.v1.RuntimeService/Status
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.631103435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d0d65a1-05b3-482a-9ed1-8ce7bbe6abc3 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.631180866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d0d65a1-05b3-482a-9ed1-8ce7bbe6abc3 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.632044864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa3ae93-cec4-4a94-81cd-4392c427b4ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.633397979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456941633373752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa3ae93-cec4-4a94-81cd-4392c427b4ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.633924835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5a78fb1-5462-45fe-8f5c-592b570c303a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.633976870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5a78fb1-5462-45fe-8f5c-592b570c303a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.634347090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b487260b826cc5fc5a514d220b98c451b52113ca247a4756423a3beaf171809,PodSandboxId:53204b239762e51646c26fd16c9de4252e5c5cd96dc8e4ebe389769b195bc869,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692468303688,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-nrn9s,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 8721b649-1364-402b-a76e-027c2ad79a82,},Annotations:map[string]string{io.kubernetes.container.hash: b28e93b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7272caf407db1d7c296b2ec8e8a82c20ba6ec86e86c131747b2b24e756df5a2b,PodSandboxId:1adadc8477618021e39a781db80cf627ab9287a7471baf813a120f4163746dc0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692267925680,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q5924,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73db615e-01a6-4977-9e7b-977dc14e48d2,},Annotations:map[string]string{io.kubernetes.container.hash: 60c400cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAI
NER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01
e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2
d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e1
5b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5a78fb1-5462-45fe-8f5c-592b570c303a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.665813178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afbe98a8-f72e-464b-9eec-6370e2c85351 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.665940987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afbe98a8-f72e-464b-9eec-6370e2c85351 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.667317841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78ecb1d8-4992-423f-8529-84a0d097eb45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.668786471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456941668756872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78ecb1d8-4992-423f-8529-84a0d097eb45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.669533876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=731f586c-d8c5-4562-9af2-dc59a06d5c49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.669599480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=731f586c-d8c5-4562-9af2-dc59a06d5c49 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:15:41 addons-877061 crio[680]: time="2024-07-31 20:15:41.670039298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b487260b826cc5fc5a514d220b98c451b52113ca247a4756423a3beaf171809,PodSandboxId:53204b239762e51646c26fd16c9de4252e5c5cd96dc8e4ebe389769b195bc869,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692468303688,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-nrn9s,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 8721b649-1364-402b-a76e-027c2ad79a82,},Annotations:map[string]string{io.kubernetes.container.hash: b28e93b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7272caf407db1d7c296b2ec8e8a82c20ba6ec86e86c131747b2b24e756df5a2b,PodSandboxId:1adadc8477618021e39a781db80cf627ab9287a7471baf813a120f4163746dc0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722456692267925680,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q5924,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 73db615e-01a6-4977-9e7b-977dc14e48d2,},Annotations:map[string]string{io.kubernetes.container.hash: 60c400cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State
:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fb
da1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAI
NER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01
e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2
d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e1
5b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=731f586c-d8c5-4562-9af2-dc59a06d5c49 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f09a6e5d911e3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   231072105c2fa       hello-world-app-6778b5fc9f-fkk6w
	27ed79ece6434       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   78658f5b20350       nginx
	f7e18596e5289       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6dd423c8300e7       busybox
	8b487260b826c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   53204b239762e       ingress-nginx-admission-patch-nrn9s
	7272caf407db1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   1adadc8477618       ingress-nginx-admission-create-q5924
	70904c77dc12f       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   901d3cd76334c       metrics-server-c59844bb4-szt4w
	07b2f64bdf439       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   16bd8b901e2ca       local-path-provisioner-8d985888d-4xqvn
	903f06add0040       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   ca23fe91f8d40       storage-provisioner
	8a5f6a95d494c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   e4cf51a462481       coredns-7db6d8ff4d-pjvjp
	904008fcac960       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             5 minutes ago       Running             kube-proxy                0                   46e85184ea23c       kube-proxy-h92bj
	63b7ef3dfd3ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   f705026c2f1a0       etcd-addons-877061
	dd30ab8ea22e5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   0c9de8a242144       kube-apiserver-addons-877061
	890c7aa8247d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   b1926e49ea823       kube-controller-manager-addons-877061
	eab1dd8098cb3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   a0da43ad40558       kube-scheduler-addons-877061
	
	
	==> coredns [8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52803 - 45331 "HINFO IN 8583045780429383597.6288047007275923205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017922286s
	[INFO] 10.244.0.22:50348 - 51016 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000290184s
	[INFO] 10.244.0.22:55545 - 31557 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108354s
	[INFO] 10.244.0.22:57511 - 14748 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127398s
	[INFO] 10.244.0.22:46889 - 64014 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00005389s
	[INFO] 10.244.0.22:52931 - 61483 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089473s
	[INFO] 10.244.0.22:59676 - 28964 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090484s
	[INFO] 10.244.0.22:41086 - 48020 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.0006069s
	[INFO] 10.244.0.22:37056 - 33957 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000907384s
	[INFO] 10.244.0.27:55542 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000331615s
	[INFO] 10.244.0.27:51183 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150636s
	
	
	==> describe nodes <==
	Name:               addons-877061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-877061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=addons-877061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_10_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-877061
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-877061
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:15:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:13:56 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:13:56 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:13:56 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:13:56 +0000   Wed, 31 Jul 2024 20:10:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    addons-877061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 28529e108a6949f1a8866ba1ce22684c
	  System UUID:                28529e10-8a69-49f1-a886-6ba1ce22684c
	  Boot ID:                    dba12f0b-0959-4974-9125-d040b0981d4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  default                     hello-world-app-6778b5fc9f-fkk6w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-pjvjp                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m6s
	  kube-system                 etcd-addons-877061                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m20s
	  kube-system                 kube-apiserver-addons-877061              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-addons-877061     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-h92bj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-addons-877061              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 metrics-server-c59844bb4-szt4w            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m2s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  local-path-storage          local-path-provisioner-8d985888d-4xqvn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  0 (0%!)(MISSING)
	  memory             370Mi (9%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node addons-877061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node addons-877061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node addons-877061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m20s                  kubelet          Node addons-877061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s                  kubelet          Node addons-877061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s                  kubelet          Node addons-877061 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m19s                  kubelet          Node addons-877061 status is now: NodeReady
	  Normal  RegisteredNode           5m7s                   node-controller  Node addons-877061 event: Registered Node addons-877061 in Controller
	
	
	==> dmesg <==
	[  +5.549922] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.859458] kauditd_printk_skb: 5 callbacks suppressed
	[Jul31 20:11] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.459873] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.453968] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.373969] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.477658] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.431468] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.874397] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.481746] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.014635] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 20:12] kauditd_printk_skb: 13 callbacks suppressed
	[ +25.785959] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.177692] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.982729] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.105439] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.185905] kauditd_printk_skb: 27 callbacks suppressed
	[Jul31 20:13] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.798852] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.116988] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.663041] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.834836] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.790424] kauditd_printk_skb: 19 callbacks suppressed
	[Jul31 20:15] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.542635] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246] <==
	{"level":"warn","ts":"2024-07-31T20:11:50.677846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.32471Z","time spent":"353.048051ms","remote":"127.0.0.1:46244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1147 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T20:11:50.678157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.504806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.25\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-31T20:11:50.678245Z","caller":"traceutil/trace.go:171","msg":"trace[108960031] range","detail":"{range_begin:/registry/masterleases/192.168.39.25; range_end:; response_count:1; response_revision:1156; }","duration":"336.614616ms","start":"2024-07-31T20:11:50.341622Z","end":"2024-07-31T20:11:50.678237Z","steps":["trace[108960031] 'agreement among raft nodes before linearized reading'  (duration: 336.438455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:50.678337Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.341609Z","time spent":"336.719525ms","remote":"127.0.0.1:46018","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.39.25\" "}
	{"level":"warn","ts":"2024-07-31T20:11:50.679082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.066298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-07-31T20:11:50.679276Z","caller":"traceutil/trace.go:171","msg":"trace[1168345062] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077; range_end:; response_count:1; response_revision:1156; }","duration":"326.282216ms","start":"2024-07-31T20:11:50.352984Z","end":"2024-07-31T20:11:50.679266Z","steps":["trace[1168345062] 'agreement among raft nodes before linearized reading'  (duration: 325.753926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:50.679399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.352972Z","time spent":"326.345454ms","remote":"127.0.0.1:46050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077\" "}
	{"level":"warn","ts":"2024-07-31T20:11:52.951749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.489434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:11:52.951812Z","caller":"traceutil/trace.go:171","msg":"trace[1233893196] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1162; }","duration":"187.579579ms","start":"2024-07-31T20:11:52.76422Z","end":"2024-07-31T20:11:52.9518Z","steps":["trace[1233893196] 'range keys from in-memory index tree'  (duration: 187.442425ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.952106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.732675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14357"}
	{"level":"info","ts":"2024-07-31T20:11:52.95296Z","caller":"traceutil/trace.go:171","msg":"trace[196402797] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1162; }","duration":"221.623878ms","start":"2024-07-31T20:11:52.731324Z","end":"2024-07-31T20:11:52.952948Z","steps":["trace[196402797] 'range keys from in-memory index tree'  (duration: 220.627812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.952121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.231494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-07-31T20:11:52.953172Z","caller":"traceutil/trace.go:171","msg":"trace[1408912019] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1162; }","duration":"306.312141ms","start":"2024-07-31T20:11:52.646851Z","end":"2024-07-31T20:11:52.953163Z","steps":["trace[1408912019] 'range keys from in-memory index tree'  (duration: 305.140525ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.953236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:52.646809Z","time spent":"306.41817ms","remote":"127.0.0.1:46156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11470,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-31T20:11:56.675581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.998746ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4688406686009807783 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" mod_revision:843 > success:<request_put:<key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" value_size:1034 >> failure:<request_range:<key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:11:56.675744Z","caller":"traceutil/trace.go:171","msg":"trace[267995728] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1232; }","duration":"219.182039ms","start":"2024-07-31T20:11:56.456553Z","end":"2024-07-31T20:11:56.675735Z","steps":["trace[267995728] 'read index received'  (duration: 24.95201ms)","trace[267995728] 'applied index is now lower than readState.Index'  (duration: 194.22941ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:11:56.675959Z","caller":"traceutil/trace.go:171","msg":"trace[1281828029] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"241.86725ms","start":"2024-07-31T20:11:56.43408Z","end":"2024-07-31T20:11:56.675947Z","steps":["trace[1281828029] 'process raft request'  (duration: 47.369867ms)","trace[1281828029] 'compare'  (duration: 193.930581ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:11:56.676149Z","caller":"traceutil/trace.go:171","msg":"trace[317476393] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"241.123472ms","start":"2024-07-31T20:11:56.43502Z","end":"2024-07-31T20:11:56.676143Z","steps":["trace[317476393] 'process raft request'  (duration: 240.643621ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:11:56.676289Z","caller":"traceutil/trace.go:171","msg":"trace[740292274] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"240.328144ms","start":"2024-07-31T20:11:56.435955Z","end":"2024-07-31T20:11:56.676283Z","steps":["trace[740292274] 'process raft request'  (duration: 239.753437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:56.67647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.906632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-31T20:11:56.67651Z","caller":"traceutil/trace.go:171","msg":"trace[2035162202] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1200; }","duration":"219.981002ms","start":"2024-07-31T20:11:56.45652Z","end":"2024-07-31T20:11:56.676502Z","steps":["trace[2035162202] 'agreement among raft nodes before linearized reading'  (duration: 219.842754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:56.676617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.551871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/yakd-dashboard/\" range_end:\"/registry/secrets/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:11:56.676649Z","caller":"traceutil/trace.go:171","msg":"trace[545984218] range","detail":"{range_begin:/registry/secrets/yakd-dashboard/; range_end:/registry/secrets/yakd-dashboard0; response_count:0; response_revision:1200; }","duration":"204.612811ms","start":"2024-07-31T20:11:56.47203Z","end":"2024-07-31T20:11:56.676643Z","steps":["trace[545984218] 'agreement among raft nodes before linearized reading'  (duration: 204.56753ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:12:53.770984Z","caller":"traceutil/trace.go:171","msg":"trace[1911453504] transaction","detail":"{read_only:false; response_revision:1459; number_of_response:1; }","duration":"210.416478ms","start":"2024-07-31T20:12:53.560546Z","end":"2024-07-31T20:12:53.770963Z","steps":["trace[1911453504] 'process raft request'  (duration: 210.23962ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:12:59.552557Z","caller":"traceutil/trace.go:171","msg":"trace[2010892642] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"164.889858ms","start":"2024-07-31T20:12:59.38765Z","end":"2024-07-31T20:12:59.55254Z","steps":["trace[2010892642] 'process raft request'  (duration: 164.790117ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:15:42 up 5 min,  0 users,  load average: 0.30, 0.77, 0.44
	Linux addons-877061 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d] <==
	W0731 20:12:35.393009       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 20:12:35.393152       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0731 20:12:35.393066       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.249.17:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.249.17:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0731 20:12:35.407501       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0731 20:13:00.767694       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 20:13:09.232151       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0731 20:13:09.435172       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 20:13:09.618321       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.48.131"}
	W0731 20:13:10.274026       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 20:13:15.336383       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.51.25"}
	I0731 20:13:23.073580       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.073882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.098979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.099082       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.192410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.192518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.193029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.193100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.208160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.208202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 20:13:24.204563       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 20:13:24.208750       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 20:13:24.232091       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 20:15:31.341411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.216.116"}
	
	
	==> kube-controller-manager [890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127] <==
	W0731 20:14:39.146736       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:14:39.146777       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:14:39.273063       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:14:39.273173       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:14:45.516572       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:14:45.516685       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:14:47.134236       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:14:47.134288       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:15:19.897695       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:15:19.897807       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:15:22.674166       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:15:22.674313       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:15:27.847499       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:15:27.847608       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 20:15:31.211438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="31.36789ms"
	I0731 20:15:31.224011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="11.58098ms"
	I0731 20:15:31.236097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.041513ms"
	I0731 20:15:31.236206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="32.215µs"
	I0731 20:15:33.831789       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0731 20:15:33.835394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.513µs"
	I0731 20:15:33.838024       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0731 20:15:34.147053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.130591ms"
	I0731 20:15:34.147131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="31.366µs"
	W0731 20:15:36.873555       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:15:36.873668       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59] <==
	I0731 20:10:39.599531       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:10:39.830507       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.25"]
	I0731 20:10:40.574657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:10:40.574721       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:10:40.574744       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:10:40.582911       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:10:40.583086       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:10:40.583096       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:10:40.595014       1 config.go:192] "Starting service config controller"
	I0731 20:10:40.595050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:10:40.595070       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:10:40.595074       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:10:40.595428       1 config.go:319] "Starting node config controller"
	I0731 20:10:40.595433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:10:40.698923       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:10:40.698952       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:10:40.698976       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61] <==
	W0731 20:10:19.499378       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:10:19.499446       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:10:19.537506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:10:19.537547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:10:19.538319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.538377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.548547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:10:19.548609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:10:19.611374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.611418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.637781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:10:19.637888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:10:19.642520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.642586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.660795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:10:19.660861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:10:19.704979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:10:19.705026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:10:19.770043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:10:19.770404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:10:19.794661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 20:10:19.794703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 20:10:19.926648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.926780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0731 20:10:22.085179       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:15:31 addons-877061 kubelet[1280]: E0731 20:15:31.199957    1280 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2e84403-dfb7-4445-83e9-f9864386e974" containerName="tiller"
	Jul 31 20:15:31 addons-877061 kubelet[1280]: I0731 20:15:31.199989    1280 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e84403-dfb7-4445-83e9-f9864386e974" containerName="tiller"
	Jul 31 20:15:31 addons-877061 kubelet[1280]: I0731 20:15:31.199997    1280 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edd757c-c753-4040-9023-e6159d9f8cde" containerName="helm-test"
	Jul 31 20:15:31 addons-877061 kubelet[1280]: I0731 20:15:31.287327    1280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlp6t\" (UniqueName: \"kubernetes.io/projected/6fdcbce7-a259-4fcd-aef3-8ab54876051a-kube-api-access-rlp6t\") pod \"hello-world-app-6778b5fc9f-fkk6w\" (UID: \"6fdcbce7-a259-4fcd-aef3-8ab54876051a\") " pod="default/hello-world-app-6778b5fc9f-fkk6w"
	Jul 31 20:15:32 addons-877061 kubelet[1280]: I0731 20:15:32.196590    1280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcbdk\" (UniqueName: \"kubernetes.io/projected/df8e5ae0-bd13-4bca-a087-107e89be68cd-kube-api-access-dcbdk\") pod \"df8e5ae0-bd13-4bca-a087-107e89be68cd\" (UID: \"df8e5ae0-bd13-4bca-a087-107e89be68cd\") "
	Jul 31 20:15:32 addons-877061 kubelet[1280]: I0731 20:15:32.198611    1280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8e5ae0-bd13-4bca-a087-107e89be68cd-kube-api-access-dcbdk" (OuterVolumeSpecName: "kube-api-access-dcbdk") pod "df8e5ae0-bd13-4bca-a087-107e89be68cd" (UID: "df8e5ae0-bd13-4bca-a087-107e89be68cd"). InnerVolumeSpecName "kube-api-access-dcbdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 20:15:32 addons-877061 kubelet[1280]: I0731 20:15:32.297302    1280 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dcbdk\" (UniqueName: \"kubernetes.io/projected/df8e5ae0-bd13-4bca-a087-107e89be68cd-kube-api-access-dcbdk\") on node \"addons-877061\" DevicePath \"\""
	Jul 31 20:15:33 addons-877061 kubelet[1280]: I0731 20:15:33.118994    1280 scope.go:117] "RemoveContainer" containerID="fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07"
	Jul 31 20:15:33 addons-877061 kubelet[1280]: I0731 20:15:33.146690    1280 scope.go:117] "RemoveContainer" containerID="fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07"
	Jul 31 20:15:33 addons-877061 kubelet[1280]: E0731 20:15:33.147170    1280 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07\": container with ID starting with fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07 not found: ID does not exist" containerID="fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07"
	Jul 31 20:15:33 addons-877061 kubelet[1280]: I0731 20:15:33.147209    1280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07"} err="failed to get container status \"fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07\": rpc error: code = NotFound desc = could not find container \"fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07\": container with ID starting with fbe09a42bda6a0c177bb46a6c4709435c00ff81870cf99a6c63388d484c97e07 not found: ID does not exist"
	Jul 31 20:15:33 addons-877061 kubelet[1280]: I0731 20:15:33.363491    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8e5ae0-bd13-4bca-a087-107e89be68cd" path="/var/lib/kubelet/pods/df8e5ae0-bd13-4bca-a087-107e89be68cd/volumes"
	Jul 31 20:15:35 addons-877061 kubelet[1280]: I0731 20:15:35.363340    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73db615e-01a6-4977-9e7b-977dc14e48d2" path="/var/lib/kubelet/pods/73db615e-01a6-4977-9e7b-977dc14e48d2/volumes"
	Jul 31 20:15:35 addons-877061 kubelet[1280]: I0731 20:15:35.363710    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8721b649-1364-402b-a76e-027c2ad79a82" path="/var/lib/kubelet/pods/8721b649-1364-402b-a76e-027c2ad79a82/volumes"
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.032216    1280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8021396-817e-4624-b107-071b7d75e3fe-webhook-cert\") pod \"d8021396-817e-4624-b107-071b7d75e3fe\" (UID: \"d8021396-817e-4624-b107-071b7d75e3fe\") "
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.032262    1280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s68tj\" (UniqueName: \"kubernetes.io/projected/d8021396-817e-4624-b107-071b7d75e3fe-kube-api-access-s68tj\") pod \"d8021396-817e-4624-b107-071b7d75e3fe\" (UID: \"d8021396-817e-4624-b107-071b7d75e3fe\") "
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.034010    1280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8021396-817e-4624-b107-071b7d75e3fe-kube-api-access-s68tj" (OuterVolumeSpecName: "kube-api-access-s68tj") pod "d8021396-817e-4624-b107-071b7d75e3fe" (UID: "d8021396-817e-4624-b107-071b7d75e3fe"). InnerVolumeSpecName "kube-api-access-s68tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.035001    1280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8021396-817e-4624-b107-071b7d75e3fe-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d8021396-817e-4624-b107-071b7d75e3fe" (UID: "d8021396-817e-4624-b107-071b7d75e3fe"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.132981    1280 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8021396-817e-4624-b107-071b7d75e3fe-webhook-cert\") on node \"addons-877061\" DevicePath \"\""
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.133016    1280 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s68tj\" (UniqueName: \"kubernetes.io/projected/d8021396-817e-4624-b107-071b7d75e3fe-kube-api-access-s68tj\") on node \"addons-877061\" DevicePath \"\""
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.142792    1280 scope.go:117] "RemoveContainer" containerID="a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6"
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.161226    1280 scope.go:117] "RemoveContainer" containerID="a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6"
	Jul 31 20:15:37 addons-877061 kubelet[1280]: E0731 20:15:37.161626    1280 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6\": container with ID starting with a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6 not found: ID does not exist" containerID="a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6"
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.161665    1280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6"} err="failed to get container status \"a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6\": rpc error: code = NotFound desc = could not find container \"a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6\": container with ID starting with a58002e9d5ebc184b65bc734498a76ab370b63e2c2a3a7e28863d24d7a7eced6 not found: ID does not exist"
	Jul 31 20:15:37 addons-877061 kubelet[1280]: I0731 20:15:37.363623    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8021396-817e-4624-b107-071b7d75e3fe" path="/var/lib/kubelet/pods/d8021396-817e-4624-b107-071b7d75e3fe/volumes"
	
	
	==> storage-provisioner [903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f] <==
	I0731 20:10:41.971307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:10:41.989264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:10:41.989313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 20:10:41.999219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 20:10:41.999399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df!
	I0731 20:10:42.001061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0892d660-346a-46f4-9d67-b3b13db61f13", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df became leader
	I0731 20:10:42.100407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-877061 -n addons-877061
helpers_test.go:261: (dbg) Run:  kubectl --context addons-877061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.63s)
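For reference, the commands below are a minimal sketch of how this check could be repeated by hand against the addons-877061 profile. They are standard kubectl/minikube usage and are not part of the captured output; the ingress-nginx namespace, the default/nginx service, and the ingress-nginx-controller deployment name are taken from the logs above, and <ingress host> is a placeholder for the host configured on the test Ingress.

    # confirm the ingress-nginx controller is running and admitted the test Ingress
    kubectl --context addons-877061 -n ingress-nginx get pods -o wide
    kubectl --context addons-877061 get ingress -A
    kubectl --context addons-877061 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
    # verify the backing service the Ingress routes to
    kubectl --context addons-877061 -n default get svc nginx
    # exercise the controller from inside the VM
    minikube -p addons-877061 ssh -- curl -sI http://127.0.0.1/ -H 'Host: <ingress host>'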

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (356.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.388969ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-szt4w" [815a74e0-c39f-4673-8b08-290908785d21] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005855047s
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (69.574896ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 2m10.482619927s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (67.451179ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 2m12.681511327s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (66.523817ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 2m18.161278432s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (71.414214ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 2m27.058048784s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (72.579738ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 2m41.550355264s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (62.056321ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 3m2.145681362s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (62.774456ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 3m22.505885245s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (62.663475ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 3m57.054050366s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (63.227704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 4m37.445205171s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (64.408953ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 5m53.318390003s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (63.28044ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 6m49.647239125s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-877061 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-877061 top pods -n kube-system: exit status 1 (68.410279ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pjvjp, age: 7m58.281414242s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
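The repeated "kubectl top pods" failures line up with the kube-apiserver log earlier in this report, where the v1beta1.metrics.k8s.io APIService is reported as failing against https://10.100.249.17:443. A hedged sketch of how that could be verified outside the harness, using only standard kubectl commands; the deployment name metrics-server is an assumption inferred from the pod name shown above.

    # did the metrics APIService ever become Available?
    kubectl --context addons-877061 get apiservice v1beta1.metrics.k8s.io
    # state and logs of the backing pod/deployment (deployment name assumed)
    kubectl --context addons-877061 -n kube-system describe pod metrics-server-c59844bb4-szt4w
    kubectl --context addons-877061 -n kube-system logs deploy/metrics-server --tail=50
    # query the aggregated API directly
    kubectl --context addons-877061 get --raw /apis/metrics.k8s.io/v1beta1/nodes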
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-877061 -n addons-877061
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 logs -n 25: (1.171236656s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-363533                                                                     | download-only-363533 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-782974 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | binary-mirror-782974                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40035                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-782974                                                                     | binary-mirror-782974 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-877061 --wait=true                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:11 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-877061 ssh cat                                                                       | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | /opt/local-path-provisioner/pvc-dc514d6f-0e3d-4ea7-a5f8-6c9da90dff2a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:13 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-877061 ip                                                                            | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:12 UTC | 31 Jul 24 20:12 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | -p addons-877061                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | addons-877061                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | -p addons-877061                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-877061 ssh curl -s                                                                   | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-877061 addons                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:13 UTC | 31 Jul 24 20:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-877061 ip                                                                            | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-877061 addons disable                                                                | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:15 UTC | 31 Jul 24 20:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-877061 addons                                                                        | addons-877061        | jenkins | v1.33.1 | 31 Jul 24 20:18 UTC | 31 Jul 24 20:18 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:09:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:09:41.662763 1101872 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:09:41.662874 1101872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:41.662884 1101872 out.go:304] Setting ErrFile to fd 2...
	I0731 20:09:41.662889 1101872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:41.663094 1101872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:09:41.663749 1101872 out.go:298] Setting JSON to false
	I0731 20:09:41.664864 1101872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13933,"bootTime":1722442649,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:09:41.664925 1101872 start.go:139] virtualization: kvm guest
	I0731 20:09:41.667171 1101872 out.go:177] * [addons-877061] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:09:41.668577 1101872 notify.go:220] Checking for updates...
	I0731 20:09:41.668585 1101872 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:09:41.669954 1101872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:09:41.671237 1101872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:09:41.672530 1101872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:41.673731 1101872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:09:41.675012 1101872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:09:41.676313 1101872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:09:41.708730 1101872 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:09:41.709984 1101872 start.go:297] selected driver: kvm2
	I0731 20:09:41.709996 1101872 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:09:41.710008 1101872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:09:41.710755 1101872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:41.710840 1101872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:09:41.725856 1101872 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:09:41.725916 1101872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:09:41.726113 1101872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:09:41.726164 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:09:41.726172 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:09:41.726180 1101872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:09:41.726255 1101872 start.go:340] cluster config:
	{Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:09:41.726353 1101872 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:41.728236 1101872 out.go:177] * Starting "addons-877061" primary control-plane node in "addons-877061" cluster
	I0731 20:09:41.729531 1101872 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:09:41.729574 1101872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:09:41.729585 1101872 cache.go:56] Caching tarball of preloaded images
	I0731 20:09:41.729663 1101872 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:09:41.729674 1101872 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:09:41.729952 1101872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json ...
	I0731 20:09:41.729970 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json: {Name:mkd574fe00eb57092056af5a3f09f0afc5a84337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:09:41.730107 1101872 start.go:360] acquireMachinesLock for addons-877061: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:09:41.730153 1101872 start.go:364] duration metric: took 32.305µs to acquireMachinesLock for "addons-877061"
	I0731 20:09:41.730171 1101872 start.go:93] Provisioning new machine with config: &{Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:09:41.730226 1101872 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:09:41.731857 1101872 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 20:09:41.732037 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:09:41.732108 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:09:41.747374 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0731 20:09:41.748130 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:09:41.748908 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:09:41.748943 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:09:41.749369 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:09:41.749589 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:09:41.749788 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:09:41.749951 1101872 start.go:159] libmachine.API.Create for "addons-877061" (driver="kvm2")
	I0731 20:09:41.749988 1101872 client.go:168] LocalClient.Create starting
	I0731 20:09:41.750036 1101872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:09:41.896487 1101872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:09:42.021690 1101872 main.go:141] libmachine: Running pre-create checks...
	I0731 20:09:42.021719 1101872 main.go:141] libmachine: (addons-877061) Calling .PreCreateCheck
	I0731 20:09:42.022258 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:09:42.022748 1101872 main.go:141] libmachine: Creating machine...
	I0731 20:09:42.022772 1101872 main.go:141] libmachine: (addons-877061) Calling .Create
	I0731 20:09:42.022957 1101872 main.go:141] libmachine: (addons-877061) Creating KVM machine...
	I0731 20:09:42.024328 1101872 main.go:141] libmachine: (addons-877061) DBG | found existing default KVM network
	I0731 20:09:42.025255 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.025078 1101894 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0731 20:09:42.025323 1101872 main.go:141] libmachine: (addons-877061) DBG | created network xml: 
	I0731 20:09:42.025353 1101872 main.go:141] libmachine: (addons-877061) DBG | <network>
	I0731 20:09:42.025366 1101872 main.go:141] libmachine: (addons-877061) DBG |   <name>mk-addons-877061</name>
	I0731 20:09:42.025376 1101872 main.go:141] libmachine: (addons-877061) DBG |   <dns enable='no'/>
	I0731 20:09:42.025382 1101872 main.go:141] libmachine: (addons-877061) DBG |   
	I0731 20:09:42.025391 1101872 main.go:141] libmachine: (addons-877061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 20:09:42.025397 1101872 main.go:141] libmachine: (addons-877061) DBG |     <dhcp>
	I0731 20:09:42.025404 1101872 main.go:141] libmachine: (addons-877061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 20:09:42.025409 1101872 main.go:141] libmachine: (addons-877061) DBG |     </dhcp>
	I0731 20:09:42.025417 1101872 main.go:141] libmachine: (addons-877061) DBG |   </ip>
	I0731 20:09:42.025422 1101872 main.go:141] libmachine: (addons-877061) DBG |   
	I0731 20:09:42.025429 1101872 main.go:141] libmachine: (addons-877061) DBG | </network>
	I0731 20:09:42.025444 1101872 main.go:141] libmachine: (addons-877061) DBG | 
	I0731 20:09:42.031118 1101872 main.go:141] libmachine: (addons-877061) DBG | trying to create private KVM network mk-addons-877061 192.168.39.0/24...
	I0731 20:09:42.096602 1101872 main.go:141] libmachine: (addons-877061) DBG | private KVM network mk-addons-877061 192.168.39.0/24 created
	I0731 20:09:42.096641 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.096547 1101894 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:42.096666 1101872 main.go:141] libmachine: (addons-877061) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 ...
	I0731 20:09:42.096683 1101872 main.go:141] libmachine: (addons-877061) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:09:42.096799 1101872 main.go:141] libmachine: (addons-877061) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:09:42.363268 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.363125 1101894 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa...
	I0731 20:09:42.403134 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.403014 1101894 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/addons-877061.rawdisk...
	I0731 20:09:42.403169 1101872 main.go:141] libmachine: (addons-877061) DBG | Writing magic tar header
	I0731 20:09:42.403185 1101872 main.go:141] libmachine: (addons-877061) DBG | Writing SSH key tar header
	I0731 20:09:42.403278 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:42.403199 1101894 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 ...
	I0731 20:09:42.403340 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061
	I0731 20:09:42.403363 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061 (perms=drwx------)
	I0731 20:09:42.403374 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:09:42.403386 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:09:42.403396 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:42.403410 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:09:42.403419 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:09:42.403430 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:09:42.403435 1101872 main.go:141] libmachine: (addons-877061) DBG | Checking permissions on dir: /home
	I0731 20:09:42.403445 1101872 main.go:141] libmachine: (addons-877061) DBG | Skipping /home - not owner
	I0731 20:09:42.403490 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:09:42.403512 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:09:42.403526 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:09:42.403535 1101872 main.go:141] libmachine: (addons-877061) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:09:42.403544 1101872 main.go:141] libmachine: (addons-877061) Creating domain...
	I0731 20:09:42.404648 1101872 main.go:141] libmachine: (addons-877061) define libvirt domain using xml: 
	I0731 20:09:42.404678 1101872 main.go:141] libmachine: (addons-877061) <domain type='kvm'>
	I0731 20:09:42.404686 1101872 main.go:141] libmachine: (addons-877061)   <name>addons-877061</name>
	I0731 20:09:42.404694 1101872 main.go:141] libmachine: (addons-877061)   <memory unit='MiB'>4000</memory>
	I0731 20:09:42.404702 1101872 main.go:141] libmachine: (addons-877061)   <vcpu>2</vcpu>
	I0731 20:09:42.404716 1101872 main.go:141] libmachine: (addons-877061)   <features>
	I0731 20:09:42.404744 1101872 main.go:141] libmachine: (addons-877061)     <acpi/>
	I0731 20:09:42.404767 1101872 main.go:141] libmachine: (addons-877061)     <apic/>
	I0731 20:09:42.404785 1101872 main.go:141] libmachine: (addons-877061)     <pae/>
	I0731 20:09:42.404799 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.404815 1101872 main.go:141] libmachine: (addons-877061)   </features>
	I0731 20:09:42.404845 1101872 main.go:141] libmachine: (addons-877061)   <cpu mode='host-passthrough'>
	I0731 20:09:42.404872 1101872 main.go:141] libmachine: (addons-877061)   
	I0731 20:09:42.404886 1101872 main.go:141] libmachine: (addons-877061)   </cpu>
	I0731 20:09:42.404899 1101872 main.go:141] libmachine: (addons-877061)   <os>
	I0731 20:09:42.404916 1101872 main.go:141] libmachine: (addons-877061)     <type>hvm</type>
	I0731 20:09:42.404932 1101872 main.go:141] libmachine: (addons-877061)     <boot dev='cdrom'/>
	I0731 20:09:42.404950 1101872 main.go:141] libmachine: (addons-877061)     <boot dev='hd'/>
	I0731 20:09:42.404963 1101872 main.go:141] libmachine: (addons-877061)     <bootmenu enable='no'/>
	I0731 20:09:42.404973 1101872 main.go:141] libmachine: (addons-877061)   </os>
	I0731 20:09:42.404981 1101872 main.go:141] libmachine: (addons-877061)   <devices>
	I0731 20:09:42.404988 1101872 main.go:141] libmachine: (addons-877061)     <disk type='file' device='cdrom'>
	I0731 20:09:42.404998 1101872 main.go:141] libmachine: (addons-877061)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/boot2docker.iso'/>
	I0731 20:09:42.405006 1101872 main.go:141] libmachine: (addons-877061)       <target dev='hdc' bus='scsi'/>
	I0731 20:09:42.405013 1101872 main.go:141] libmachine: (addons-877061)       <readonly/>
	I0731 20:09:42.405018 1101872 main.go:141] libmachine: (addons-877061)     </disk>
	I0731 20:09:42.405026 1101872 main.go:141] libmachine: (addons-877061)     <disk type='file' device='disk'>
	I0731 20:09:42.405036 1101872 main.go:141] libmachine: (addons-877061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:09:42.405044 1101872 main.go:141] libmachine: (addons-877061)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/addons-877061.rawdisk'/>
	I0731 20:09:42.405053 1101872 main.go:141] libmachine: (addons-877061)       <target dev='hda' bus='virtio'/>
	I0731 20:09:42.405059 1101872 main.go:141] libmachine: (addons-877061)     </disk>
	I0731 20:09:42.405065 1101872 main.go:141] libmachine: (addons-877061)     <interface type='network'>
	I0731 20:09:42.405073 1101872 main.go:141] libmachine: (addons-877061)       <source network='mk-addons-877061'/>
	I0731 20:09:42.405085 1101872 main.go:141] libmachine: (addons-877061)       <model type='virtio'/>
	I0731 20:09:42.405092 1101872 main.go:141] libmachine: (addons-877061)     </interface>
	I0731 20:09:42.405097 1101872 main.go:141] libmachine: (addons-877061)     <interface type='network'>
	I0731 20:09:42.405105 1101872 main.go:141] libmachine: (addons-877061)       <source network='default'/>
	I0731 20:09:42.405110 1101872 main.go:141] libmachine: (addons-877061)       <model type='virtio'/>
	I0731 20:09:42.405120 1101872 main.go:141] libmachine: (addons-877061)     </interface>
	I0731 20:09:42.405133 1101872 main.go:141] libmachine: (addons-877061)     <serial type='pty'>
	I0731 20:09:42.405147 1101872 main.go:141] libmachine: (addons-877061)       <target port='0'/>
	I0731 20:09:42.405159 1101872 main.go:141] libmachine: (addons-877061)     </serial>
	I0731 20:09:42.405179 1101872 main.go:141] libmachine: (addons-877061)     <console type='pty'>
	I0731 20:09:42.405191 1101872 main.go:141] libmachine: (addons-877061)       <target type='serial' port='0'/>
	I0731 20:09:42.405201 1101872 main.go:141] libmachine: (addons-877061)     </console>
	I0731 20:09:42.405212 1101872 main.go:141] libmachine: (addons-877061)     <rng model='virtio'>
	I0731 20:09:42.405225 1101872 main.go:141] libmachine: (addons-877061)       <backend model='random'>/dev/random</backend>
	I0731 20:09:42.405235 1101872 main.go:141] libmachine: (addons-877061)     </rng>
	I0731 20:09:42.405245 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.405256 1101872 main.go:141] libmachine: (addons-877061)     
	I0731 20:09:42.405266 1101872 main.go:141] libmachine: (addons-877061)   </devices>
	I0731 20:09:42.405277 1101872 main.go:141] libmachine: (addons-877061) </domain>
	I0731 20:09:42.405286 1101872 main.go:141] libmachine: (addons-877061) 
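The block above is the libvirt domain XML that the kvm2 driver defines for the new VM. As a rough, hypothetical sketch of what submitting such XML through the libvirt Go bindings can look like (this is not the driver's actual code; only the qemu:///system URI is taken from the KVMQemuURI value logged earlier):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // older code imports github.com/libvirt/libvirt-go
)

// defineAndStart registers a persistent domain from the given XML and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // start the defined domain
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}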
	I0731 20:09:42.411334 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:df:0f:9e in network default
	I0731 20:09:42.411928 1101872 main.go:141] libmachine: (addons-877061) Ensuring networks are active...
	I0731 20:09:42.411947 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:42.412594 1101872 main.go:141] libmachine: (addons-877061) Ensuring network default is active
	I0731 20:09:42.412904 1101872 main.go:141] libmachine: (addons-877061) Ensuring network mk-addons-877061 is active
	I0731 20:09:42.413406 1101872 main.go:141] libmachine: (addons-877061) Getting domain xml...
	I0731 20:09:42.414120 1101872 main.go:141] libmachine: (addons-877061) Creating domain...
	I0731 20:09:43.851922 1101872 main.go:141] libmachine: (addons-877061) Waiting to get IP...
	I0731 20:09:43.852863 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:43.853261 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:43.853285 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:43.853242 1101894 retry.go:31] will retry after 298.181213ms: waiting for machine to come up
	I0731 20:09:44.152997 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.153454 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.153491 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.153409 1101894 retry.go:31] will retry after 252.414928ms: waiting for machine to come up
	I0731 20:09:44.407994 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.408426 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.408457 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.408362 1101894 retry.go:31] will retry after 348.212309ms: waiting for machine to come up
	I0731 20:09:44.757936 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:44.758433 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:44.758458 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:44.758372 1101894 retry.go:31] will retry after 496.150607ms: waiting for machine to come up
	I0731 20:09:45.255934 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:45.256368 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:45.256391 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:45.256326 1101894 retry.go:31] will retry after 608.889823ms: waiting for machine to come up
	I0731 20:09:45.867608 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:45.868045 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:45.868074 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:45.867996 1101894 retry.go:31] will retry after 862.084322ms: waiting for machine to come up
	I0731 20:09:46.731956 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:46.732339 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:46.732373 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:46.732290 1101894 retry.go:31] will retry after 1.17249745s: waiting for machine to come up
	I0731 20:09:47.907191 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:47.907637 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:47.907669 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:47.907573 1101894 retry.go:31] will retry after 1.355826093s: waiting for machine to come up
	I0731 20:09:49.264747 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:49.265174 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:49.265206 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:49.265125 1101894 retry.go:31] will retry after 1.229798824s: waiting for machine to come up
	I0731 20:09:50.496596 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:50.497049 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:50.497083 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:50.496994 1101894 retry.go:31] will retry after 1.45034615s: waiting for machine to come up
	I0731 20:09:51.948563 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:51.949050 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:51.949083 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:51.949001 1101894 retry.go:31] will retry after 1.754586547s: waiting for machine to come up
	I0731 20:09:53.705998 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:53.706421 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:53.706534 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:53.706456 1101894 retry.go:31] will retry after 3.4501379s: waiting for machine to come up
	I0731 20:09:57.158577 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:09:57.159087 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:09:57.159112 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:09:57.158989 1101894 retry.go:31] will retry after 3.279487567s: waiting for machine to come up
	I0731 20:10:00.442593 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:00.442990 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find current IP address of domain addons-877061 in network mk-addons-877061
	I0731 20:10:00.443015 1101872 main.go:141] libmachine: (addons-877061) DBG | I0731 20:10:00.442942 1101894 retry.go:31] will retry after 3.601297589s: waiting for machine to come up
	I0731 20:10:04.045584 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.046009 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has current primary IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.046029 1101872 main.go:141] libmachine: (addons-877061) Found IP for machine: 192.168.39.25
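The repeated "will retry after ..." lines come from minikube's retry helper polling libvirt for a DHCP lease until the guest reports an address. A minimal, hypothetical sketch of that wait-with-growing-backoff pattern (not the actual retry.go implementation) is:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP calls lookup until it succeeds, sleeping a bit longer after each
// failed attempt, similar in spirit to the retry.go lines in the log above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return "", errors.New("machine did not report an IP in time")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 5)
	fmt.Println(ip, err)
}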
	I0731 20:10:04.046072 1101872 main.go:141] libmachine: (addons-877061) Reserving static IP address...
	I0731 20:10:04.046401 1101872 main.go:141] libmachine: (addons-877061) DBG | unable to find host DHCP lease matching {name: "addons-877061", mac: "52:54:00:2c:19:b6", ip: "192.168.39.25"} in network mk-addons-877061
	I0731 20:10:04.120261 1101872 main.go:141] libmachine: (addons-877061) DBG | Getting to WaitForSSH function...
	I0731 20:10:04.120294 1101872 main.go:141] libmachine: (addons-877061) Reserved static IP address: 192.168.39.25
	I0731 20:10:04.120307 1101872 main.go:141] libmachine: (addons-877061) Waiting for SSH to be available...
	I0731 20:10:04.122753 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.123163 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.123197 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.123298 1101872 main.go:141] libmachine: (addons-877061) DBG | Using SSH client type: external
	I0731 20:10:04.123323 1101872 main.go:141] libmachine: (addons-877061) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa (-rw-------)
	I0731 20:10:04.123354 1101872 main.go:141] libmachine: (addons-877061) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:10:04.123371 1101872 main.go:141] libmachine: (addons-877061) DBG | About to run SSH command:
	I0731 20:10:04.123382 1101872 main.go:141] libmachine: (addons-877061) DBG | exit 0
	I0731 20:10:04.251947 1101872 main.go:141] libmachine: (addons-877061) DBG | SSH cmd err, output: <nil>: 
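The WaitForSSH step above shells out to the system ssh client and treats a clean "exit 0" as proof the guest is reachable. A hypothetical Go equivalent of that probe using os/exec (the option list and paths are copied from the logged command, not taken from the driver's source):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once "ssh ... exit 0" succeeds against the guest.
func sshReady(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshReady("192.168.39.25",
		"/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa")
	fmt.Println("ssh probe result:", err)
}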
	I0731 20:10:04.252250 1101872 main.go:141] libmachine: (addons-877061) KVM machine creation complete!
	I0731 20:10:04.252561 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:10:04.253104 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:04.253294 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:04.253452 1101872 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:10:04.253468 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:04.254738 1101872 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:10:04.254754 1101872 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:10:04.254759 1101872 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:10:04.254766 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.256882 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.257190 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.257214 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.257411 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.257617 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.257779 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.257919 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.258093 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.258348 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.258366 1101872 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:10:04.367255 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:10:04.367285 1101872 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:10:04.367294 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.370014 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.370394 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.370419 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.370584 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.370790 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.370999 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.371187 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.371409 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.371596 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.371607 1101872 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:10:04.484156 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:10:04.484267 1101872 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:10:04.484284 1101872 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:10:04.484298 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.484605 1101872 buildroot.go:166] provisioning hostname "addons-877061"
	I0731 20:10:04.484634 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.484863 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.487182 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.487480 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.487509 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.487700 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.487916 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.488098 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.488242 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.488411 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.488630 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.488645 1101872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-877061 && echo "addons-877061" | sudo tee /etc/hostname
	I0731 20:10:04.612040 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877061
	
	I0731 20:10:04.612079 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.614823 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.615240 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.615295 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.615452 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.615671 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.615861 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.616053 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.616252 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.616445 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.616461 1101872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-877061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-877061/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-877061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:10:04.735643 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:10:04.735674 1101872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:10:04.735731 1101872 buildroot.go:174] setting up certificates
	I0731 20:10:04.735743 1101872 provision.go:84] configureAuth start
	I0731 20:10:04.735756 1101872 main.go:141] libmachine: (addons-877061) Calling .GetMachineName
	I0731 20:10:04.736028 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:04.738410 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.738738 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.738778 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.738910 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.740917 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.741230 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.741251 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.741395 1101872 provision.go:143] copyHostCerts
	I0731 20:10:04.741482 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:10:04.741623 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:10:04.741683 1101872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:10:04.741736 1101872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.addons-877061 san=[127.0.0.1 192.168.39.25 addons-877061 localhost minikube]
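Those SANs (loopback, the VM's DHCP address 192.168.39.25, the machine name, localhost and minikube) are what later let both the host-side client and in-guest components validate the server certificate. A minimal, hypothetical way to inspect the SANs of the generated server.pem with Go's standard library (not something this run does):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path as written by the provisioner in the log above; adjust for your environment.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Expect the DNS and IP SANs listed in the provisioning log above.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}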
	I0731 20:10:04.841410 1101872 provision.go:177] copyRemoteCerts
	I0731 20:10:04.841474 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:10:04.841503 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.843984 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.844311 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.844347 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.844481 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.844716 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.844875 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.845040 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:04.929228 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:10:04.950948 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:10:04.971769 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:10:04.992732 1101872 provision.go:87] duration metric: took 256.974803ms to configureAuth
	I0731 20:10:04.992759 1101872 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:10:04.992921 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:04.993001 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:04.995547 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.995927 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:04.995954 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:04.996129 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:04.996351 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.996545 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:04.996663 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:04.996829 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:04.997012 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:04.997031 1101872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:10:05.262830 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:10:05.262887 1101872 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:10:05.262901 1101872 main.go:141] libmachine: (addons-877061) Calling .GetURL
	I0731 20:10:05.264296 1101872 main.go:141] libmachine: (addons-877061) DBG | Using libvirt version 6000000
	I0731 20:10:05.266672 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.267102 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.267131 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.267284 1101872 main.go:141] libmachine: Docker is up and running!
	I0731 20:10:05.267302 1101872 main.go:141] libmachine: Reticulating splines...
	I0731 20:10:05.267310 1101872 client.go:171] duration metric: took 23.517308382s to LocalClient.Create
	I0731 20:10:05.267336 1101872 start.go:167] duration metric: took 23.517385394s to libmachine.API.Create "addons-877061"
	I0731 20:10:05.267370 1101872 start.go:293] postStartSetup for "addons-877061" (driver="kvm2")
	I0731 20:10:05.267386 1101872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:10:05.267410 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.267698 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:10:05.267726 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.270092 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.270402 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.270427 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.270528 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.270721 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.270905 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.271072 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.357644 1101872 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:10:05.361368 1101872 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:10:05.361397 1101872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:10:05.361475 1101872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:10:05.361501 1101872 start.go:296] duration metric: took 94.121882ms for postStartSetup
	I0731 20:10:05.361544 1101872 main.go:141] libmachine: (addons-877061) Calling .GetConfigRaw
	I0731 20:10:05.362194 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:05.364534 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.364915 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.364937 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.365168 1101872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/config.json ...
	I0731 20:10:05.365350 1101872 start.go:128] duration metric: took 23.635112572s to createHost
	I0731 20:10:05.365402 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.368056 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.368395 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.368435 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.368537 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.368754 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.368946 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.369058 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.369214 1101872 main.go:141] libmachine: Using SSH client type: native
	I0731 20:10:05.369425 1101872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0731 20:10:05.369441 1101872 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:10:05.480248 1101872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722456605.458262123
	
	I0731 20:10:05.480273 1101872 fix.go:216] guest clock: 1722456605.458262123
	I0731 20:10:05.480281 1101872 fix.go:229] Guest: 2024-07-31 20:10:05.458262123 +0000 UTC Remote: 2024-07-31 20:10:05.365363546 +0000 UTC m=+23.736809928 (delta=92.898577ms)
	I0731 20:10:05.480336 1101872 fix.go:200] guest clock delta is within tolerance: 92.898577ms
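The fix.go step above compares the guest's clock against the host-side "Remote" timestamp and only intervenes when the skew exceeds a tolerance. A minimal sketch of that comparison, using the two timestamps printed in the log (the one-second tolerance here is illustrative, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host ("Remote") timestamps as printed in the log above.
	guest := time.Date(2024, time.July, 31, 20, 10, 5, 458262123, time.UTC)
	remote := time.Date(2024, time.July, 31, 20, 10, 5, 365363546, time.UTC)

	// Illustrative tolerance; the actual threshold is internal to minikube's fix step.
	const tolerance = time.Second

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	// Prints: delta=92.898577ms, within tolerance: true
}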
	I0731 20:10:05.480347 1101872 start.go:83] releasing machines lock for "addons-877061", held for 23.750182454s
	I0731 20:10:05.480373 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.480677 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:05.483179 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.483497 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.483532 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.483725 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484233 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484465 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:05.484606 1101872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:10:05.484654 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.484703 1101872 ssh_runner.go:195] Run: cat /version.json
	I0731 20:10:05.484733 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:05.487061 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487415 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.487447 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487469 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.487719 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.487928 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.487937 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:05.487973 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:05.488074 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:05.488171 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.488262 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:05.488333 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.488377 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:05.488541 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:05.588468 1101872 ssh_runner.go:195] Run: systemctl --version
	I0731 20:10:05.593755 1101872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:10:05.748616 1101872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:10:05.754239 1101872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:10:05.754316 1101872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:10:05.768678 1101872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:10:05.768704 1101872 start.go:495] detecting cgroup driver to use...
	I0731 20:10:05.768772 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:10:05.784180 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:10:05.797071 1101872 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:10:05.797121 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:10:05.809431 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:10:05.821709 1101872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:10:05.935050 1101872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:10:06.088122 1101872 docker.go:233] disabling docker service ...
	I0731 20:10:06.088194 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:10:06.102213 1101872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:10:06.114209 1101872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:10:06.227528 1101872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:10:06.342502 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:10:06.355909 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:10:06.372427 1101872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:10:06.372504 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.382299 1101872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:10:06.382366 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.392003 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.401384 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.410875 1101872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:10:06.420523 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.430013 1101872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:10:06.445312 1101872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
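Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (a reconstruction from the commands, since the resulting file is never printed): the pause image is pinned, the cgroup driver is switched to cgroupfs to match the kubelet configuration generated later, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 lets containers bind low ports without extra privileges:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]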
	I0731 20:10:06.454998 1101872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:10:06.463829 1101872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:10:06.463885 1101872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:10:06.476194 1101872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:10:06.484963 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:06.595215 1101872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:10:06.721069 1101872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:10:06.721169 1101872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:10:06.725362 1101872 start.go:563] Will wait 60s for crictl version
	I0731 20:10:06.725439 1101872 ssh_runner.go:195] Run: which crictl
	I0731 20:10:06.728681 1101872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:10:06.763238 1101872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:10:06.763367 1101872 ssh_runner.go:195] Run: crio --version
	I0731 20:10:06.788732 1101872 ssh_runner.go:195] Run: crio --version
	I0731 20:10:06.823043 1101872 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:10:06.824459 1101872 main.go:141] libmachine: (addons-877061) Calling .GetIP
	I0731 20:10:06.826944 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:06.827304 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:06.827335 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:06.827567 1101872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:10:06.831392 1101872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:10:06.843382 1101872 kubeadm.go:883] updating cluster {Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:10:06.843534 1101872 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:10:06.843595 1101872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:10:06.871904 1101872 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:10:06.871981 1101872 ssh_runner.go:195] Run: which lz4
	I0731 20:10:06.875535 1101872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:10:06.879154 1101872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:10:06.879188 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:10:08.065441 1101872 crio.go:462] duration metric: took 1.189930085s to copy over tarball
	I0731 20:10:08.065549 1101872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:10:10.234643 1101872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.169058277s)
	I0731 20:10:10.234676 1101872 crio.go:469] duration metric: took 2.169196058s to extract the tarball
	I0731 20:10:10.234687 1101872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:10:10.271319 1101872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:10:10.309858 1101872 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:10:10.309889 1101872 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:10:10.309902 1101872 kubeadm.go:934] updating node { 192.168.39.25 8443 v1.30.3 crio true true} ...
	I0731 20:10:10.310041 1101872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-877061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
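The unit fragment above is the systemd drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp further down). The empty ExecStart= line is the standard systemd idiom for clearing the command inherited from the base kubelet.service before the drop-in substitutes the minikube-specific invocation, with --hostname-override and --node-ip pinned to this node. Reassembled as it would land on disk:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstruction of the fragment above)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-877061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.25

    [Install]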
	I0731 20:10:10.310132 1101872 ssh_runner.go:195] Run: crio config
	I0731 20:10:10.355459 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:10:10.355483 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:10:10.355506 1101872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:10:10.355542 1101872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.25 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-877061 NodeName:addons-877061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:10:10.355718 1101872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-877061"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
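Two notes on the generated config above. First, the %!"(MISSING) fragments in the evictionHard block are logging artifacts: the template is echoed through a printf-style call without arguments, so Go's fmt package flags the literal percent signs; the file written to the guest should read plainly, for example:

    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"

Second, this document is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new (the 2154-byte transfer below), copied to kubeadm.yaml, and passed to `kubeadm init --config` at 20:10:12.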
	
	I0731 20:10:10.355812 1101872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:10:10.364927 1101872 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:10:10.365004 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:10:10.373493 1101872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 20:10:10.388903 1101872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:10:10.403600 1101872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0731 20:10:10.418483 1101872 ssh_runner.go:195] Run: grep 192.168.39.25	control-plane.minikube.internal$ /etc/hosts
	I0731 20:10:10.422021 1101872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:10:10.432755 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:10.545580 1101872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:10:10.561846 1101872 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061 for IP: 192.168.39.25
	I0731 20:10:10.561876 1101872 certs.go:194] generating shared ca certs ...
	I0731 20:10:10.561911 1101872 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.562105 1101872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:10:10.608298 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt ...
	I0731 20:10:10.608330 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt: {Name:mk2ab08007953158416a03ea13176bac62a60120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.608526 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key ...
	I0731 20:10:10.608541 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key: {Name:mk996214a0f78812401e96bd781853b13ddbdc3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.608652 1101872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:10:10.936721 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt ...
	I0731 20:10:10.936755 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt: {Name:mk355b96fd4550604698f58523265e11d1e33ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.936931 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key ...
	I0731 20:10:10.936942 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key: {Name:mkd3d94eb66d256de4785040cd6e2e932ccf8f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:10.937010 1101872 certs.go:256] generating profile certs ...
	I0731 20:10:10.937086 1101872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key
	I0731 20:10:10.937100 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt with IP's: []
	I0731 20:10:11.069668 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt ...
	I0731 20:10:11.069699 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: {Name:mk1d4f549e753268fa7d38fab982a5df48bacdc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.069879 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key ...
	I0731 20:10:11.069890 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.key: {Name:mk10b93e096972e82f2279fa4f9ced407e6fd21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.069962 1101872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc
	I0731 20:10:11.069980 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.25]
	I0731 20:10:11.503254 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc ...
	I0731 20:10:11.503295 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc: {Name:mkcb2470601fe2c34add3a88327863ce2693a403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.503486 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc ...
	I0731 20:10:11.503501 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc: {Name:mk98a8de165b4bf52d24b342e7677707c99a7698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.503575 1101872 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt.1c331ecc -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt
	I0731 20:10:11.503650 1101872 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key.1c331ecc -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key
	I0731 20:10:11.503697 1101872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key
	I0731 20:10:11.503716 1101872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt with IP's: []
	I0731 20:10:11.713642 1101872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt ...
	I0731 20:10:11.713674 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt: {Name:mkeb5cf10009dd08cd5003aba20a9c24b8ff2be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.713851 1101872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key ...
	I0731 20:10:11.713865 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key: {Name:mk4cadf9987a7b4c2587b5bc22f415734c532f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:11.714039 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:10:11.714075 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:10:11.714100 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:10:11.714125 1101872 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:10:11.714740 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:10:11.738701 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:10:11.761522 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:10:11.783895 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:10:11.805016 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 20:10:11.825906 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:10:11.847927 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:10:11.869959 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:10:11.891542 1101872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:10:11.914991 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:10:11.933231 1101872 ssh_runner.go:195] Run: openssl version
	I0731 20:10:11.938889 1101872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:10:11.948972 1101872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.961416 1101872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.961495 1101872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:10:11.969427 1101872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
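The two symlink commands above wire the minikube CA into OpenSSL's hash-based lookup: the certificate is first linked under its readable name, then under the subject-hash name computed by the `openssl x509 -hash` call just before (the same convention c_rehash uses), giving roughly this layout on the guest:

    /etc/ssl/certs/minikubeCA.pem -> /usr/share/ca-certificates/minikubeCA.pem
    /etc/ssl/certs/b5213941.0     -> /etc/ssl/certs/minikubeCA.pem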
	I0731 20:10:11.980808 1101872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:10:11.984724 1101872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:10:11.984792 1101872 kubeadm.go:392] StartCluster: {Name:addons-877061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-877061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:10:11.984900 1101872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:10:11.984982 1101872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:10:12.020864 1101872 cri.go:89] found id: ""
	I0731 20:10:12.020959 1101872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:10:12.030290 1101872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:10:12.039318 1101872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:10:12.048065 1101872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:10:12.048103 1101872 kubeadm.go:157] found existing configuration files:
	
	I0731 20:10:12.048159 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:10:12.057645 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:10:12.057709 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:10:12.066595 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:10:12.074729 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:10:12.074786 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:10:12.084237 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:10:12.092807 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:10:12.092877 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:10:12.101705 1101872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:10:12.109998 1101872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:10:12.110052 1101872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:10:12.118532 1101872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:10:12.166515 1101872 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 20:10:12.166652 1101872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:10:12.281406 1101872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:10:12.281542 1101872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:10:12.281691 1101872 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:10:12.470322 1101872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:10:12.600577 1101872 out.go:204]   - Generating certificates and keys ...
	I0731 20:10:12.600746 1101872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:10:12.600863 1101872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:10:12.720304 1101872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:10:12.867722 1101872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:10:12.917204 1101872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:10:13.172722 1101872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:10:13.501957 1101872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:10:13.502177 1101872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-877061 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I0731 20:10:13.662307 1101872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:10:13.662468 1101872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-877061 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I0731 20:10:13.939212 1101872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:10:14.057633 1101872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:10:14.120202 1101872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:10:14.120427 1101872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:10:14.293872 1101872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:10:14.364956 1101872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 20:10:14.552445 1101872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:10:14.706753 1101872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:10:15.017164 1101872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:10:15.017602 1101872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:10:15.019765 1101872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:10:15.021675 1101872 out.go:204]   - Booting up control plane ...
	I0731 20:10:15.021758 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:10:15.021823 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:10:15.021884 1101872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:10:15.051411 1101872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:10:15.052470 1101872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:10:15.052526 1101872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:10:15.168516 1101872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 20:10:15.168642 1101872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 20:10:15.668357 1101872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.789824ms
	I0731 20:10:15.668491 1101872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 20:10:20.668211 1101872 kubeadm.go:310] [api-check] The API server is healthy after 5.002291697s
	I0731 20:10:20.681456 1101872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 20:10:20.696925 1101872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 20:10:20.724773 1101872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 20:10:20.724950 1101872 kubeadm.go:310] [mark-control-plane] Marking the node addons-877061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 20:10:20.742563 1101872 kubeadm.go:310] [bootstrap-token] Using token: my6dzf.f6910kd3utos5wxr
	I0731 20:10:20.743904 1101872 out.go:204]   - Configuring RBAC rules ...
	I0731 20:10:20.744003 1101872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 20:10:20.751418 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 20:10:20.763084 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 20:10:20.767018 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 20:10:20.774050 1101872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 20:10:20.779355 1101872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 20:10:21.074281 1101872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 20:10:21.519525 1101872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 20:10:22.074237 1101872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 20:10:22.075074 1101872 kubeadm.go:310] 
	I0731 20:10:22.075137 1101872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 20:10:22.075145 1101872 kubeadm.go:310] 
	I0731 20:10:22.075245 1101872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 20:10:22.075259 1101872 kubeadm.go:310] 
	I0731 20:10:22.075284 1101872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 20:10:22.075339 1101872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 20:10:22.075399 1101872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 20:10:22.075409 1101872 kubeadm.go:310] 
	I0731 20:10:22.075479 1101872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 20:10:22.075489 1101872 kubeadm.go:310] 
	I0731 20:10:22.075547 1101872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 20:10:22.075556 1101872 kubeadm.go:310] 
	I0731 20:10:22.075625 1101872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 20:10:22.075713 1101872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 20:10:22.075811 1101872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 20:10:22.075821 1101872 kubeadm.go:310] 
	I0731 20:10:22.075933 1101872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 20:10:22.076021 1101872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 20:10:22.076027 1101872 kubeadm.go:310] 
	I0731 20:10:22.076114 1101872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token my6dzf.f6910kd3utos5wxr \
	I0731 20:10:22.076255 1101872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 20:10:22.076276 1101872 kubeadm.go:310] 	--control-plane 
	I0731 20:10:22.076281 1101872 kubeadm.go:310] 
	I0731 20:10:22.076350 1101872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 20:10:22.076388 1101872 kubeadm.go:310] 
	I0731 20:10:22.076474 1101872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token my6dzf.f6910kd3utos5wxr \
	I0731 20:10:22.076585 1101872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 20:10:22.077153 1101872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:10:22.077185 1101872 cni.go:84] Creating CNI manager for ""
	I0731 20:10:22.077196 1101872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:10:22.079155 1101872 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:10:22.080701 1101872 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:10:22.090791 1101872 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:10:22.107039 1101872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:10:22.107106 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:22.107152 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-877061 minikube.k8s.io/updated_at=2024_07_31T20_10_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=addons-877061 minikube.k8s.io/primary=true
	I0731 20:10:22.144838 1101872 ops.go:34] apiserver oom_adj: -16
	I0731 20:10:22.213504 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:22.713992 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:23.214085 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:23.713470 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:24.213082 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:24.713117 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:25.214070 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:25.713581 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:26.213066 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:26.714102 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:27.213707 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:27.713502 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:28.213359 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:28.713712 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:29.213844 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:29.713167 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:30.213993 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:30.714004 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:31.213100 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:31.713975 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:32.213565 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:32.713192 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:33.213948 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:33.713252 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.213394 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.714131 1101872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:10:34.800756 1101872 kubeadm.go:1113] duration metric: took 12.693713349s to wait for elevateKubeSystemPrivileges
	I0731 20:10:34.800800 1101872 kubeadm.go:394] duration metric: took 22.816013892s to StartCluster
	I0731 20:10:34.800828 1101872 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:34.800997 1101872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:10:34.801388 1101872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:10:34.801593 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 20:10:34.801623 1101872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:10:34.801709 1101872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 20:10:34.801833 1101872 addons.go:69] Setting helm-tiller=true in profile "addons-877061"
	I0731 20:10:34.801856 1101872 addons.go:69] Setting yakd=true in profile "addons-877061"
	I0731 20:10:34.801864 1101872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-877061"
	I0731 20:10:34.801891 1101872 addons.go:234] Setting addon helm-tiller=true in "addons-877061"
	I0731 20:10:34.801891 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:34.801898 1101872 addons.go:69] Setting ingress=true in profile "addons-877061"
	I0731 20:10:34.801901 1101872 addons.go:69] Setting default-storageclass=true in profile "addons-877061"
	I0731 20:10:34.801915 1101872 addons.go:234] Setting addon ingress=true in "addons-877061"
	I0731 20:10:34.801890 1101872 addons.go:234] Setting addon yakd=true in "addons-877061"
	I0731 20:10:34.801949 1101872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-877061"
	I0731 20:10:34.801955 1101872 addons.go:69] Setting registry=true in profile "addons-877061"
	I0731 20:10:34.801957 1101872 addons.go:69] Setting inspektor-gadget=true in profile "addons-877061"
	I0731 20:10:34.801962 1101872 addons.go:69] Setting storage-provisioner=true in profile "addons-877061"
	I0731 20:10:34.801975 1101872 addons.go:234] Setting addon registry=true in "addons-877061"
	I0731 20:10:34.802004 1101872 addons.go:234] Setting addon storage-provisioner=true in "addons-877061"
	I0731 20:10:34.802015 1101872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-877061"
	I0731 20:10:34.802025 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801955 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802047 1101872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-877061"
	I0731 20:10:34.802069 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801976 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802028 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801942 1101872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-877061"
	I0731 20:10:34.801942 1101872 addons.go:69] Setting gcp-auth=true in profile "addons-877061"
	I0731 20:10:34.802526 1101872 mustload.go:65] Loading cluster: addons-877061
	I0731 20:10:34.802528 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802550 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802582 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802583 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802596 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.802605 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802615 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802616 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802624 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802665 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.801876 1101872 addons.go:69] Setting cloud-spanner=true in profile "addons-877061"
	I0731 20:10:34.801985 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.802693 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.802700 1101872 config.go:182] Loaded profile config "addons-877061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:10:34.802712 1101872 addons.go:234] Setting addon cloud-spanner=true in "addons-877061"
	I0731 20:10:34.802560 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.801936 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.801988 1101872 addons.go:69] Setting metrics-server=true in profile "addons-877061"
	I0731 20:10:34.802793 1101872 addons.go:234] Setting addon metrics-server=true in "addons-877061"
	I0731 20:10:34.803030 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.801998 1101872 addons.go:69] Setting volumesnapshots=true in profile "addons-877061"
	I0731 20:10:34.802000 1101872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-877061"
	I0731 20:10:34.801977 1101872 addons.go:234] Setting addon inspektor-gadget=true in "addons-877061"
	I0731 20:10:34.803087 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803145 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803151 1101872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-877061"
	I0731 20:10:34.801998 1101872 addons.go:69] Setting volcano=true in profile "addons-877061"
	I0731 20:10:34.803567 1101872 addons.go:234] Setting addon volcano=true in "addons-877061"
	I0731 20:10:34.803601 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803624 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803657 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803827 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803853 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804002 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.803107 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804029 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.801985 1101872 addons.go:69] Setting ingress-dns=true in profile "addons-877061"
	I0731 20:10:34.804078 1101872 addons.go:234] Setting addon ingress-dns=true in "addons-877061"
	I0731 20:10:34.803117 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.804133 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.804451 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804498 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804502 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804539 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803119 1101872 addons.go:234] Setting addon volumesnapshots=true in "addons-877061"
	I0731 20:10:34.804705 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.803156 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.804775 1101872 out.go:177] * Verifying Kubernetes components...
	I0731 20:10:34.803092 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.804936 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.803183 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.808379 1101872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:10:34.824315 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0731 20:10:34.824436 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0731 20:10:34.824444 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0731 20:10:34.824510 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I0731 20:10:34.824992 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825138 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825152 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825539 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.825775 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825794 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.825794 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825811 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.825927 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.825937 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.826137 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826208 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0731 20:10:34.826448 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.826460 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.826536 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.826603 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826624 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.826851 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.826965 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.827003 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.827015 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.827217 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.827264 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.827337 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.827408 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.827444 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.831524 1101872 addons.go:234] Setting addon default-storageclass=true in "addons-877061"
	I0731 20:10:34.831575 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.831961 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832008 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832509 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832539 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832644 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832664 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.832935 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.832971 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.834483 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.834528 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.846149 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0731 20:10:34.846152 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0731 20:10:34.846637 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.846669 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.847062 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.847082 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.847549 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.847571 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.848637 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.849487 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.849528 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.851946 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0731 20:10:34.852439 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.852608 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.853153 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.853176 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.853220 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.853254 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.854200 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0731 20:10:34.856424 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.857433 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.857452 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.858246 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.858561 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.861136 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.861767 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.861815 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.865927 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.868328 1101872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 20:10:34.868922 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I0731 20:10:34.869464 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.869809 1101872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 20:10:34.869830 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 20:10:34.869853 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.870209 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.870229 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.870313 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0731 20:10:34.870571 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.871408 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.871457 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.872753 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.873502 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.873520 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.873585 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.874064 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.874104 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.874191 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.874267 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0731 20:10:34.874275 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.874460 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.874664 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.874751 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.874837 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.874886 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.875337 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.875354 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.875436 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0731 20:10:34.875985 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.876167 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.876316 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.877298 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.877318 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.877569 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0731 20:10:34.878198 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.878292 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.878791 1101872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-877061"
	I0731 20:10:34.878842 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.879221 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.879261 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.879305 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.879321 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.879799 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.880029 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.880453 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.880494 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.880701 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:10:34.882933 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0731 20:10:34.882950 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0731 20:10:34.883518 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.883547 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.883775 1101872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:10:34.883795 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:10:34.883814 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.884581 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.884597 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.885297 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.885464 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.887056 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.887078 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.887642 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.887972 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.888043 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.889133 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.889304 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0731 20:10:34.889572 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:34.889596 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.889626 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.889790 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.889863 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.890114 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.890295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.890438 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.890450 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.890654 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.890964 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.891506 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.891544 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.891633 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:34.892741 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 20:10:34.892766 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.892808 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.893933 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:34.894131 1101872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 20:10:34.894147 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 20:10:34.894168 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.894302 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.894343 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.897603 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.898177 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.898209 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.898431 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.898764 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.898945 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.899089 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.899528 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0731 20:10:34.899964 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.900650 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.900670 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.901030 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.901571 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.901607 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.911387 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0731 20:10:34.911972 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.912638 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.912660 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.913158 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.913846 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.913888 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.915604 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0731 20:10:34.916362 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.916996 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.917015 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.917744 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0731 20:10:34.918250 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.918847 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.918864 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.919261 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.919322 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I0731 20:10:34.920120 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.920160 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.920382 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.920948 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.920970 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.921037 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.921591 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.921670 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.922902 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0731 20:10:34.923053 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.924892 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.925024 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0731 20:10:34.927120 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0731 20:10:34.927134 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.927259 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.927302 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.927315 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.927379 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0731 20:10:34.928294 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.928322 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.928402 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.928465 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.928794 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:34.928808 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:34.928881 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.930243 1101872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 20:10:34.930624 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0731 20:10:34.930788 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.930893 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.930970 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.931020 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:34.931044 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:34.931052 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:34.931061 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:34.931068 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:34.931200 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.931250 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:34.931257 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 20:10:34.931351 1101872 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 20:10:34.931494 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.931567 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.931620 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 20:10:34.931635 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 20:10:34.931659 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.932442 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.932459 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.933222 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.933239 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.933350 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.933885 1101872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 20:10:34.934036 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:34.934068 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:34.934539 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.934759 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.934819 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.934854 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0731 20:10:34.936351 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 20:10:34.936690 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.936713 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.936756 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.936887 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.937425 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.937443 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.937511 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.937624 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.937693 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 20:10:34.937707 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 20:10:34.937726 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.937770 1101872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:10:34.937798 1101872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:10:34.937816 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.938416 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.938433 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.938501 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.938556 1101872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 20:10:34.938627 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.938930 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.938960 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.939243 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.940620 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0731 20:10:34.940736 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.941206 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.941254 1101872 out.go:177]   - Using image docker.io/busybox:stable
	I0731 20:10:34.941295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.941502 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.941915 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.941931 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.942073 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.942220 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.942241 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0731 20:10:34.942242 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.942467 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.942581 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.942658 1101872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 20:10:34.942673 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 20:10:34.942692 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.942776 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.942942 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.942992 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.943026 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.943161 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.943476 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.943509 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.943814 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.943833 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.944352 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.944371 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.944622 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.944918 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.945114 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.945533 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.946178 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.946186 1101872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 20:10:34.946395 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.946643 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.947552 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 20:10:34.947575 1101872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 20:10:34.947593 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.947658 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.947670 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 20:10:34.948064 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.948081 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.948245 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 20:10:34.948312 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.948413 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.948591 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.948686 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 20:10:34.948760 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.949208 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.949229 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.949724 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0731 20:10:34.949728 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.949912 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.950288 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.950404 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 20:10:34.950594 1101872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 20:10:34.950608 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 20:10:34.950624 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.950667 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.950679 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.950936 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.951019 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.952537 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.952633 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 20:10:34.953417 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.954127 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.954183 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.954603 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.954634 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.954776 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.954882 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.955094 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.955252 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 20:10:34.955259 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.955313 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.955336 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.955340 1101872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 20:10:34.955500 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.955519 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.955988 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.956166 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.956270 1101872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 20:10:34.956305 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.956911 1101872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 20:10:34.956929 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 20:10:34.956948 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.957885 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 20:10:34.957896 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:10:34.957966 1101872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:10:34.957986 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.960103 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 20:10:34.960325 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.960748 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.960770 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.960942 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.961135 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.961327 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.961473 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.961482 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.961879 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.961903 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.962127 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.962295 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.962413 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 20:10:34.962481 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.962590 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.964624 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 20:10:34.965834 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 20:10:34.965855 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 20:10:34.965876 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.968669 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.968989 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.969009 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.969185 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.969372 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.969446 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43981
	I0731 20:10:34.969688 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.969816 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.970070 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:34.970593 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:34.970607 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:34.970951 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:34.971135 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:34.972807 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:34.974257 1101872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 20:10:34.975585 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 20:10:34.975600 1101872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 20:10:34.975614 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:34.978691 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.979058 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:34.979073 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:34.979223 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:34.979377 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:34.979499 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:34.979598 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:34.982910 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0731 20:10:35.000636 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:35.001235 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:35.001258 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:35.001695 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:35.001920 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:35.003800 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:35.006031 1101872 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 20:10:35.007407 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 20:10:35.007427 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 20:10:35.007445 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:35.010111 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:35.010623 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:35.010654 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:35.010849 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:35.011043 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:35.011239 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:35.011407 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:35.206025 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:10:35.275744 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 20:10:35.291417 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 20:10:35.307207 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:10:35.323006 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 20:10:35.362034 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 20:10:35.362068 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 20:10:35.375768 1101872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:10:35.375922 1101872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 20:10:35.378301 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 20:10:35.378324 1101872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 20:10:35.390060 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 20:10:35.417560 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 20:10:35.436447 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 20:10:35.436474 1101872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 20:10:35.470342 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 20:10:35.470372 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 20:10:35.494808 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 20:10:35.494834 1101872 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 20:10:35.506245 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:10:35.506266 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 20:10:35.513379 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 20:10:35.513408 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 20:10:35.533202 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 20:10:35.533228 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 20:10:35.546389 1101872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 20:10:35.546412 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 20:10:35.584171 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 20:10:35.584204 1101872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 20:10:35.630596 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 20:10:35.630627 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 20:10:35.654055 1101872 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 20:10:35.654101 1101872 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 20:10:35.673251 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:10:35.673283 1101872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:10:35.700468 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 20:10:35.700497 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 20:10:35.730896 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 20:10:35.730923 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 20:10:35.737789 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 20:10:35.772021 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 20:10:35.772058 1101872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 20:10:35.797288 1101872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 20:10:35.797321 1101872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 20:10:35.819564 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 20:10:35.820861 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 20:10:35.820881 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 20:10:35.863043 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 20:10:35.863071 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 20:10:35.864240 1101872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:10:35.864280 1101872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:10:35.939626 1101872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 20:10:35.939649 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 20:10:35.948393 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 20:10:35.948420 1101872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 20:10:35.972239 1101872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 20:10:35.972269 1101872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 20:10:36.046755 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 20:10:36.046791 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 20:10:36.056899 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:10:36.132032 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 20:10:36.151365 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 20:10:36.151392 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 20:10:36.172186 1101872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:36.172213 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 20:10:36.314542 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 20:10:36.314573 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 20:10:36.398708 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 20:10:36.398740 1101872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 20:10:36.437186 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:36.540606 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 20:10:36.540646 1101872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 20:10:36.647151 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 20:10:36.647179 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 20:10:36.678061 1101872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 20:10:36.678088 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 20:10:36.889673 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 20:10:36.889701 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 20:10:36.932109 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 20:10:37.074386 1101872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 20:10:37.074426 1101872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 20:10:37.247024 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 20:10:39.233551 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.027483901s)
	I0731 20:10:39.233631 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.233644 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.233652 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.957871043s)
	I0731 20:10:39.233708 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.233725 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234104 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234107 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234149 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234108 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234163 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234174 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.234182 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234193 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.234202 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234206 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.234534 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.234608 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234574 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.234582 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.236047 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.234585 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.320600 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.029107802s)
	I0731 20:10:39.320628 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.013372899s)
	I0731 20:10:39.320665 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320677 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320679 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320692 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320713 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.997673039s)
	I0731 20:10:39.320762 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320778 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.320798 1101872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.944826611s)
	I0731 20:10:39.320819 1101872 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 20:10:39.320877 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.930786705s)
	I0731 20:10:39.320909 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.320919 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321312 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321335 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321366 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321372 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321379 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321386 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321448 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321454 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321462 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321468 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321808 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.321851 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321858 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321866 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321873 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.321935 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.321945 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.321953 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.321961 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.322100 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.322136 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.322152 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.322363 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.322404 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.322420 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.322997 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.323028 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.323034 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.323347 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.323364 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.324545 1101872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.945006583s)
	I0731 20:10:39.325504 1101872 node_ready.go:35] waiting up to 6m0s for node "addons-877061" to be "Ready" ...
	I0731 20:10:39.384515 1101872 node_ready.go:49] node "addons-877061" has status "Ready":"True"
	I0731 20:10:39.384542 1101872 node_ready.go:38] duration metric: took 59.010062ms for node "addons-877061" to be "Ready" ...
	I0731 20:10:39.384554 1101872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:10:39.441153 1101872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:39.472252 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.472281 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.472628 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.472652 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.472674 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	W0731 20:10:39.472780 1101872 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0731 20:10:39.486602 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:39.486633 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:39.486962 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:39.486990 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:39.487000 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:39.866253 1101872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-877061" context rescaled to 1 replicas
	I0731 20:10:41.455134 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:41.957494 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 20:10:41.957547 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:41.960881 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:41.961363 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:41.961397 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:41.961662 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:41.961991 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:41.962223 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:41.962422 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:42.226345 1101872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 20:10:42.288444 1101872 addons.go:234] Setting addon gcp-auth=true in "addons-877061"
	I0731 20:10:42.288512 1101872 host.go:66] Checking if "addons-877061" exists ...
	I0731 20:10:42.288883 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:42.288922 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:42.306365 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0731 20:10:42.306945 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:42.307581 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:42.307611 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:42.307981 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:42.308541 1101872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:10:42.308574 1101872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:10:42.323954 1101872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45439
	I0731 20:10:42.324437 1101872 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:10:42.325001 1101872 main.go:141] libmachine: Using API Version  1
	I0731 20:10:42.325026 1101872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:10:42.325410 1101872 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:10:42.325648 1101872 main.go:141] libmachine: (addons-877061) Calling .GetState
	I0731 20:10:42.327415 1101872 main.go:141] libmachine: (addons-877061) Calling .DriverName
	I0731 20:10:42.327663 1101872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 20:10:42.327695 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHHostname
	I0731 20:10:42.330313 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:42.330748 1101872 main.go:141] libmachine: (addons-877061) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:19:b6", ip: ""} in network mk-addons-877061: {Iface:virbr1 ExpiryTime:2024-07-31 21:09:55 +0000 UTC Type:0 Mac:52:54:00:2c:19:b6 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:addons-877061 Clientid:01:52:54:00:2c:19:b6}
	I0731 20:10:42.330788 1101872 main.go:141] libmachine: (addons-877061) DBG | domain addons-877061 has defined IP address 192.168.39.25 and MAC address 52:54:00:2c:19:b6 in network mk-addons-877061
	I0731 20:10:42.330918 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHPort
	I0731 20:10:42.331117 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHKeyPath
	I0731 20:10:42.331301 1101872 main.go:141] libmachine: (addons-877061) Calling .GetSSHUsername
	I0731 20:10:42.331471 1101872 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/addons-877061/id_rsa Username:docker}
	I0731 20:10:42.733470 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.315869653s)
	I0731 20:10:42.733525 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733536 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733601 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.995763645s)
	I0731 20:10:42.733656 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.914059162s)
	I0731 20:10:42.733656 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733723 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733747 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.676816179s)
	I0731 20:10:42.733680 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733775 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733784 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733806 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.733853 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.601780478s)
	I0731 20:10:42.733887 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.733898 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734119 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.734146 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.734157 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.734166 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734426 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.297201841s)
	W0731 20:10:42.734463 1101872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 20:10:42.734502 1101872 retry.go:31] will retry after 177.691158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 20:10:42.734587 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.802444493s)
	I0731 20:10:42.734613 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.734622 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.734754 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.734768 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.734779 1101872 addons.go:475] Verifying addon ingress=true in "addons-877061"
	I0731 20:10:42.735032 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735233 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735257 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735297 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735318 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735328 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.735628 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735654 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735660 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735667 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735675 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.735719 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.735738 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.735744 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.735753 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.735759 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.736571 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.736622 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.736642 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.736655 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.736664 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.736726 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.736750 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.736757 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.736766 1101872 addons.go:475] Verifying addon metrics-server=true in "addons-877061"
	I0731 20:10:42.737079 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.737134 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737145 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737563 1101872 out.go:177] * Verifying ingress addon...
	I0731 20:10:42.737751 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737772 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737774 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.737786 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:42.737800 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:42.737824 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.737838 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.737846 1101872 addons.go:475] Verifying addon registry=true in "addons-877061"
	I0731 20:10:42.738058 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.738128 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:42.738164 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.738177 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:42.738183 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.738191 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:42.739026 1101872 out.go:177] * Verifying registry addon...
	I0731 20:10:42.740182 1101872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 20:10:42.740291 1101872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-877061 service yakd-dashboard -n yakd-dashboard
	
	I0731 20:10:42.741836 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 20:10:42.748060 1101872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 20:10:42.748077 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:42.755534 1101872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 20:10:42.755554 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:42.912435 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 20:10:43.245506 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:43.246572 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:43.744451 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:43.750483 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:43.955296 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:44.253768 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.006670024s)
	I0731 20:10:44.253845 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.253868 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.253849 1101872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.92615674s)
	I0731 20:10:44.254168 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:44.254180 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.254194 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.254212 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.254224 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.254464 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.254478 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.254489 1101872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-877061"
	I0731 20:10:44.255658 1101872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 20:10:44.255664 1101872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 20:10:44.257538 1101872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 20:10:44.258294 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 20:10:44.258728 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 20:10:44.258744 1101872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 20:10:44.266417 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:44.280755 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:44.291380 1101872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 20:10:44.291404 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:44.389125 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 20:10:44.389152 1101872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 20:10:44.447382 1101872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 20:10:44.447407 1101872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 20:10:44.508995 1101872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 20:10:44.672880 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.760377894s)
	I0731 20:10:44.672961 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.672977 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.673379 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.673400 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.673411 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:44.673419 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:44.673661 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:44.673680 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:44.748085 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:44.750160 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:44.769604 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:45.244752 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:45.253543 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:45.264369 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:45.648523 1101872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.139477809s)
	I0731 20:10:45.648598 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:45.648616 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:45.648950 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:45.648987 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:45.649003 1101872 main.go:141] libmachine: Making call to close driver server
	I0731 20:10:45.649011 1101872 main.go:141] libmachine: (addons-877061) Calling .Close
	I0731 20:10:45.649250 1101872 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:10:45.649292 1101872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:10:45.649312 1101872 main.go:141] libmachine: (addons-877061) DBG | Closing plugin on server side
	I0731 20:10:45.650778 1101872 addons.go:475] Verifying addon gcp-auth=true in "addons-877061"
	I0731 20:10:45.652756 1101872 out.go:177] * Verifying gcp-auth addon...
	I0731 20:10:45.654735 1101872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 20:10:45.667961 1101872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 20:10:45.667986 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:45.744884 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:45.750142 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:45.764475 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:46.158722 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:46.243598 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:46.246705 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:46.263545 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:46.451724 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:46.659319 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:46.746729 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:46.746852 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:46.767264 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:47.158978 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:47.245050 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:47.246785 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:47.461532 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:47.658468 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:47.744931 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:47.746257 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:47.764478 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.158171 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:48.247089 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:48.249236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:48.264916 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.658976 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:48.746388 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:48.746713 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:48.764855 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:48.946965 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:49.158373 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:49.245160 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:49.246524 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:49.265453 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:49.700349 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:49.746807 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:49.747742 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:49.768811 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.157859 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:50.245658 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:50.248061 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:50.263860 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.658387 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:50.750863 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:50.751578 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:50.773202 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:50.948543 1101872 pod_ready.go:102] pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace has status "Ready":"False"
	I0731 20:10:51.159634 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:51.244475 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:51.246721 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:51.263971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:51.445434 1101872 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-fw2p8" not found
	I0731 20:10:51.445471 1101872 pod_ready.go:81] duration metric: took 12.004284155s for pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace to be "Ready" ...
	E0731 20:10:51.445487 1101872 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-fw2p8" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-fw2p8" not found
	I0731 20:10:51.445509 1101872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.450598 1101872 pod_ready.go:92] pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.450624 1101872 pod_ready.go:81] duration metric: took 5.101582ms for pod "coredns-7db6d8ff4d-pjvjp" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.450634 1101872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.455976 1101872 pod_ready.go:92] pod "etcd-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.455998 1101872 pod_ready.go:81] duration metric: took 5.356211ms for pod "etcd-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.456007 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.461238 1101872 pod_ready.go:92] pod "kube-apiserver-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.461258 1101872 pod_ready.go:81] duration metric: took 5.244109ms for pod "kube-apiserver-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.461269 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.466666 1101872 pod_ready.go:92] pod "kube-controller-manager-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.466684 1101872 pod_ready.go:81] duration metric: took 5.409103ms for pod "kube-controller-manager-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.466695 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h92bj" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.651562 1101872 pod_ready.go:92] pod "kube-proxy-h92bj" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:51.651593 1101872 pod_ready.go:81] duration metric: took 184.890923ms for pod "kube-proxy-h92bj" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.651608 1101872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:51.658127 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:51.745127 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:51.746805 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:51.764894 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.044940 1101872 pod_ready.go:92] pod "kube-scheduler-addons-877061" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:52.044970 1101872 pod_ready.go:81] duration metric: took 393.352713ms for pod "kube-scheduler-addons-877061" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.044984 1101872 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.157999 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:52.245237 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:52.246347 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:52.263144 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.444558 1101872 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace has status "Ready":"True"
	I0731 20:10:52.444584 1101872 pod_ready.go:81] duration metric: took 399.592841ms for pod "nvidia-device-plugin-daemonset-5kbf8" in "kube-system" namespace to be "Ready" ...
	I0731 20:10:52.444605 1101872 pod_ready.go:38] duration metric: took 13.060030069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:10:52.444623 1101872 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:10:52.444680 1101872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:10:52.461760 1101872 api_server.go:72] duration metric: took 17.660094129s to wait for apiserver process to appear ...
	I0731 20:10:52.461795 1101872 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:10:52.461834 1101872 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0731 20:10:52.466781 1101872 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I0731 20:10:52.467778 1101872 api_server.go:141] control plane version: v1.30.3
	I0731 20:10:52.467807 1101872 api_server.go:131] duration metric: took 6.005109ms to wait for apiserver health ...
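The healthz probe above is a plain HTTPS GET against the apiserver endpoint recorded in the log. A rough sketch of the same check follows; the real client authenticates with the cluster's client certificates, whereas this sketch skips TLS verification purely for brevity, so treat it as an assumption-laden illustration.

// Illustrative /healthz probe; endpoint copied from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Verification is skipped only because this is a sketch; the real check
	// presents the cluster's client certs instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.25:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver answers 200 with "ok"
}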
	I0731 20:10:52.467817 1101872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:10:52.652496 1101872 system_pods.go:59] 18 kube-system pods found
	I0731 20:10:52.652532 1101872 system_pods.go:61] "coredns-7db6d8ff4d-pjvjp" [e01b9e3f-5d75-4f28-bef3-a1160ea25c49] Running
	I0731 20:10:52.652544 1101872 system_pods.go:61] "csi-hostpath-attacher-0" [faf92bf1-8436-4e5f-812b-2d8ee7be78f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 20:10:52.652555 1101872 system_pods.go:61] "csi-hostpath-resizer-0" [66792cd1-a930-47fe-aba7-0e628cbf832c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 20:10:52.652566 1101872 system_pods.go:61] "csi-hostpathplugin-w6w49" [85ac230e-8509-454a-a821-35db1c0791a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 20:10:52.652577 1101872 system_pods.go:61] "etcd-addons-877061" [c4d67bbf-58e0-4d6a-a64d-80504f04c202] Running
	I0731 20:10:52.652582 1101872 system_pods.go:61] "kube-apiserver-addons-877061" [88683965-c027-49db-b09d-ebcd761edde0] Running
	I0731 20:10:52.652585 1101872 system_pods.go:61] "kube-controller-manager-addons-877061" [2e12f940-8ee6-46c7-b124-b87c822a8116] Running
	I0731 20:10:52.652591 1101872 system_pods.go:61] "kube-ingress-dns-minikube" [df8e5ae0-bd13-4bca-a087-107e89be68cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 20:10:52.652598 1101872 system_pods.go:61] "kube-proxy-h92bj" [8dac7096-4089-4931-8b7d-506f46fa30aa] Running
	I0731 20:10:52.652602 1101872 system_pods.go:61] "kube-scheduler-addons-877061" [153883b8-84c7-48cc-a5ef-f0bc34d4fdb4] Running
	I0731 20:10:52.652610 1101872 system_pods.go:61] "metrics-server-c59844bb4-szt4w" [815a74e0-c39f-4673-8b08-290908785d21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:10:52.652613 1101872 system_pods.go:61] "nvidia-device-plugin-daemonset-5kbf8" [c837ef00-57b2-4111-8588-1b47358c0549] Running
	I0731 20:10:52.652621 1101872 system_pods.go:61] "registry-698f998955-pgf2q" [40e9667a-bd97-42a3-bb45-e40bc6e3b530] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 20:10:52.652627 1101872 system_pods.go:61] "registry-proxy-cdmns" [ec3040a1-3e1e-4ba3-9242-35e9ce417ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 20:10:52.652638 1101872 system_pods.go:61] "snapshot-controller-745499f584-2jq5t" [e85349e6-8af3-456c-b244-8e0916f824d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:52.652651 1101872 system_pods.go:61] "snapshot-controller-745499f584-kc6dc" [518b68fb-1e49-48af-862e-7907a32ba284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:52.652661 1101872 system_pods.go:61] "storage-provisioner" [0edee967-79b7-490d-baf7-7412a25fc2c7] Running
	I0731 20:10:52.652672 1101872 system_pods.go:61] "tiller-deploy-6677d64bcd-7dwjf" [b2e84403-dfb7-4445-83e9-f9864386e974] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 20:10:52.652684 1101872 system_pods.go:74] duration metric: took 184.859472ms to wait for pod list to return data ...
	I0731 20:10:52.652695 1101872 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:10:52.658249 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:52.747102 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:52.749236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:52.771215 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:52.844988 1101872 default_sa.go:45] found service account: "default"
	I0731 20:10:52.845016 1101872 default_sa.go:55] duration metric: took 192.311468ms for default service account to be created ...
	I0731 20:10:52.845025 1101872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:10:53.050549 1101872 system_pods.go:86] 18 kube-system pods found
	I0731 20:10:53.050582 1101872 system_pods.go:89] "coredns-7db6d8ff4d-pjvjp" [e01b9e3f-5d75-4f28-bef3-a1160ea25c49] Running
	I0731 20:10:53.050591 1101872 system_pods.go:89] "csi-hostpath-attacher-0" [faf92bf1-8436-4e5f-812b-2d8ee7be78f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 20:10:53.050598 1101872 system_pods.go:89] "csi-hostpath-resizer-0" [66792cd1-a930-47fe-aba7-0e628cbf832c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 20:10:53.050607 1101872 system_pods.go:89] "csi-hostpathplugin-w6w49" [85ac230e-8509-454a-a821-35db1c0791a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 20:10:53.050613 1101872 system_pods.go:89] "etcd-addons-877061" [c4d67bbf-58e0-4d6a-a64d-80504f04c202] Running
	I0731 20:10:53.050617 1101872 system_pods.go:89] "kube-apiserver-addons-877061" [88683965-c027-49db-b09d-ebcd761edde0] Running
	I0731 20:10:53.050621 1101872 system_pods.go:89] "kube-controller-manager-addons-877061" [2e12f940-8ee6-46c7-b124-b87c822a8116] Running
	I0731 20:10:53.050628 1101872 system_pods.go:89] "kube-ingress-dns-minikube" [df8e5ae0-bd13-4bca-a087-107e89be68cd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0731 20:10:53.050633 1101872 system_pods.go:89] "kube-proxy-h92bj" [8dac7096-4089-4931-8b7d-506f46fa30aa] Running
	I0731 20:10:53.050638 1101872 system_pods.go:89] "kube-scheduler-addons-877061" [153883b8-84c7-48cc-a5ef-f0bc34d4fdb4] Running
	I0731 20:10:53.050644 1101872 system_pods.go:89] "metrics-server-c59844bb4-szt4w" [815a74e0-c39f-4673-8b08-290908785d21] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:10:53.050649 1101872 system_pods.go:89] "nvidia-device-plugin-daemonset-5kbf8" [c837ef00-57b2-4111-8588-1b47358c0549] Running
	I0731 20:10:53.050655 1101872 system_pods.go:89] "registry-698f998955-pgf2q" [40e9667a-bd97-42a3-bb45-e40bc6e3b530] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 20:10:53.050660 1101872 system_pods.go:89] "registry-proxy-cdmns" [ec3040a1-3e1e-4ba3-9242-35e9ce417ec0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 20:10:53.050667 1101872 system_pods.go:89] "snapshot-controller-745499f584-2jq5t" [e85349e6-8af3-456c-b244-8e0916f824d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:53.050677 1101872 system_pods.go:89] "snapshot-controller-745499f584-kc6dc" [518b68fb-1e49-48af-862e-7907a32ba284] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 20:10:53.050681 1101872 system_pods.go:89] "storage-provisioner" [0edee967-79b7-490d-baf7-7412a25fc2c7] Running
	I0731 20:10:53.050687 1101872 system_pods.go:89] "tiller-deploy-6677d64bcd-7dwjf" [b2e84403-dfb7-4445-83e9-f9864386e974] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 20:10:53.050695 1101872 system_pods.go:126] duration metric: took 205.66429ms to wait for k8s-apps to be running ...
	I0731 20:10:53.050706 1101872 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:10:53.050752 1101872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:10:53.066741 1101872 system_svc.go:56] duration metric: took 16.022805ms WaitForService to wait for kubelet
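The kubelet service check above reduces to a `systemctl is-active` exit-code test, run on the guest over SSH. A local-only sketch of that probe, assuming passwordless sudo, is below; the argument list simply mirrors the command shown in the log.

// Illustrative local version of the systemctl probe from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}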
	I0731 20:10:53.066780 1101872 kubeadm.go:582] duration metric: took 18.265119683s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:10:53.066804 1101872 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:10:53.157895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:53.246662 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:53.247302 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:53.247725 1101872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:10:53.247748 1101872 node_conditions.go:123] node cpu capacity is 2
	I0731 20:10:53.247763 1101872 node_conditions.go:105] duration metric: took 180.953959ms to run NodePressure ...
	I0731 20:10:53.247779 1101872 start.go:241] waiting for startup goroutines ...
	I0731 20:10:53.262910 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:53.658703 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:53.745084 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:53.747575 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:53.765186 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:54.159350 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:54.709079 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:54.709243 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:54.709308 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:54.709888 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:54.745860 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:54.748268 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:54.765368 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:55.158628 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:55.245176 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:55.246557 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:55.263713 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:55.658362 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:55.747102 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:55.748035 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:55.764364 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:56.158713 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:56.245690 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:56.247228 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:56.266792 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:56.658516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:56.744609 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:56.746814 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:56.765277 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:57.158666 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:57.245210 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:57.246826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:57.264558 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:57.658844 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:57.744218 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:57.746386 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:57.768341 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:58.159466 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:58.252030 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:58.252459 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:58.264844 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:58.658484 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:58.744971 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:58.746548 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:58.765057 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:59.158940 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:59.244736 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:59.246832 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:59.262782 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:10:59.658613 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:10:59.744940 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:10:59.748919 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:10:59.769095 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:00.158977 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:00.247229 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:00.248676 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:00.265327 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:00.658971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:00.745493 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:00.747175 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:00.765222 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:01.159101 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:01.245461 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:01.246218 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:01.263358 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:01.657978 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:01.745009 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:01.746839 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:01.764380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:02.158648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:02.246041 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:02.251194 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:02.263938 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:02.658648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:02.797576 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:02.798990 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:02.800106 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:03.158754 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:03.244260 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:03.246827 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:03.262711 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:03.658912 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:03.744768 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:03.747516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:03.770008 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:04.160556 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:04.244399 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:04.246458 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:04.264415 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:04.658134 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:04.746388 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:04.746669 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:04.772380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:05.158345 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:05.245649 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:05.246437 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:05.264035 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:05.658300 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:05.744380 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:05.746793 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:05.768018 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:06.158743 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:06.244715 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:06.246976 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:06.263085 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:06.658895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:06.745592 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:06.747282 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:06.770162 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:07.158351 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:07.246583 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:07.247733 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:07.263202 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:07.658540 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:07.748471 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:07.749325 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:07.767870 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:08.445054 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:08.445167 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:08.445899 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:08.446920 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:08.658681 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:08.744764 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:08.746657 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:08.766543 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:09.158154 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:09.245958 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:09.250100 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:09.264457 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:09.658631 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:09.744909 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:09.747041 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:09.762623 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:10.158126 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:10.245410 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:10.246436 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:10.264870 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:10.658924 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:10.745635 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:10.746675 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:10.766778 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:11.159972 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:11.246622 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:11.247384 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:11.263808 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:11.658257 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:11.744290 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:11.746468 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:11.765242 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:12.158888 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:12.245373 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:12.247537 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:12.263756 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:12.658112 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:12.745734 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:12.750770 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:12.763626 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:13.158684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:13.244122 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:13.246949 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:13.263516 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:13.658954 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:13.746088 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:13.753847 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:13.769643 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:14.158247 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:14.245402 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:14.246535 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:14.263502 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:14.919831 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:14.921368 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:14.922147 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:14.923196 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:15.158672 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:15.244616 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:15.247048 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:15.263162 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:15.659709 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:15.745781 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:15.748281 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 20:11:15.764581 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:16.159088 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:16.245283 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:16.246854 1101872 kapi.go:107] duration metric: took 33.505013889s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 20:11:16.262989 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:16.658771 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:16.744930 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:16.768242 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:17.158096 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:17.245004 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:17.262688 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:17.658517 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:17.745313 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:17.765059 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:18.160224 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:18.244784 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:18.265278 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:18.658255 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:18.744855 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:18.765406 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:19.158098 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:19.245965 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:19.263647 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:19.658252 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:19.744533 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:19.763094 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:20.158995 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:20.244904 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:20.264069 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:20.659222 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:20.745307 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:20.767942 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:21.158604 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:21.244421 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:21.263655 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:21.658538 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:21.744690 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:21.764000 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:22.158806 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:22.258155 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:22.269720 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:22.909957 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:22.915855 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:22.917536 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:23.158897 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:23.249153 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:23.263983 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:23.658757 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:23.744210 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:23.764220 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:24.160512 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:24.244097 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:24.262846 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:24.658921 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:24.745745 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:24.766442 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:25.159255 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:25.245610 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:25.263876 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:25.658684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:25.744405 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:25.763470 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:26.158442 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:26.246425 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:26.264189 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:26.658759 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:26.744763 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:26.765084 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:27.158899 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:27.244431 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:27.274822 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:27.657937 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:27.745152 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:27.767096 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:28.159027 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:28.245005 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:28.263952 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:28.658881 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:28.744587 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:28.764379 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:29.157895 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:29.248182 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:29.262801 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:29.658708 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:29.744920 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:29.764698 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:30.159740 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:30.250916 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:30.269015 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:30.660585 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:30.746335 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:30.765947 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:31.158534 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:31.244162 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:31.262890 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:31.659437 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:31.744366 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:31.770665 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:32.158591 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:32.244284 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:32.263946 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:32.658547 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:32.743827 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:32.766552 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:33.158037 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:33.244831 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:33.263408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:33.658607 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:33.744825 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:33.763684 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:34.158387 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:34.245009 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:34.263408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:34.658450 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:34.745659 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:34.765843 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:35.204189 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:35.244703 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:35.263671 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:35.659068 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:35.744725 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:35.766094 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:36.158920 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:36.244751 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:36.263996 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:36.659513 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:36.744887 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:36.766165 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:37.192407 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:37.245421 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:37.263280 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:37.659293 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:37.745013 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:37.763793 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:38.159093 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:38.248245 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:38.263036 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:38.665053 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:38.745818 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:38.769380 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:39.159861 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:39.244926 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:39.264798 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:39.658225 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:39.745314 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:39.765626 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:40.159933 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:40.244731 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:40.263826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:40.659640 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:40.744989 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:40.765420 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:41.159517 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:41.244065 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:41.262539 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:41.658448 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:41.744811 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:41.768138 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:42.159651 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:42.244291 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:42.263073 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:42.659134 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:42.744879 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:42.764581 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:43.159954 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:43.244512 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:43.263079 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:43.659648 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:43.744890 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:43.769236 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:44.158249 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:44.245098 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:44.264154 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:44.659971 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:44.744935 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:44.765397 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:45.160672 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:45.247699 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:45.266414 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 20:11:45.658826 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:45.745322 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:45.768121 1101872 kapi.go:107] duration metric: took 1m1.509824853s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 20:11:46.159408 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:46.245638 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:46.658655 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:46.746626 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:47.158377 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:47.245709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:47.658682 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:47.744368 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:48.157871 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:48.244311 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:48.658290 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:48.745418 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:49.158111 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:49.245290 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:49.659849 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:49.744562 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:50.158902 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:50.244709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:50.697406 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:50.744760 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:51.159232 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:51.244773 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:51.659495 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:51.744791 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:52.159073 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:52.245239 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:52.967179 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:52.967966 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:53.159247 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:53.246709 1101872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 20:11:53.658426 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:53.745633 1101872 kapi.go:107] duration metric: took 1m11.005447524s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 20:11:54.159727 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:54.658129 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:55.159067 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:55.659619 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:56.159541 1101872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 20:11:56.693859 1101872 kapi.go:107] duration metric: took 1m11.039120407s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 20:11:56.695355 1101872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-877061 cluster.
	I0731 20:11:56.696942 1101872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 20:11:56.698601 1101872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 20:11:56.700451 1101872 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, default-storageclass, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 20:11:56.701672 1101872 addons.go:510] duration metric: took 1m21.899960186s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin default-storageclass metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 20:11:56.701726 1101872 start.go:246] waiting for cluster config update ...
	I0731 20:11:56.701756 1101872 start.go:255] writing updated cluster config ...
	I0731 20:11:56.702055 1101872 ssh_runner.go:195] Run: rm -f paused
	I0731 20:11:56.754324 1101872 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 20:11:56.756129 1101872 out.go:177] * Done! kubectl is now configured to use "addons-877061" cluster and "default" namespace by default
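	The polling lines above are minikube re-checking each addon's pods by label selector until they leave Pending; the "duration metric" entries record when each selector was finally satisfied. As a minimal sketch (not part of the test harness), roughly equivalent readiness checks could be run by hand with kubectl wait, assuming the addons' default namespaces (ingress-nginx for the ingress controller, kube-system for the CSI hostpath driver, gcp-auth for the GCP auth webhook) and that the pods carry the same labels the log shows:

	# hypothetical manual re-check of the selectors minikube waits on (assumed namespaces)
	kubectl --context addons-877061 -n ingress-nginx wait pod --selector=app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=120s
	kubectl --context addons-877061 -n kube-system wait pod --selector=kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=120s
	kubectl --context addons-877061 -n gcp-auth wait pod --selector=kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=120s

	Note that kubectl wait checks the Ready condition, which is slightly stricter than the Pending-to-Running transition the log records, so this is an approximation rather than the exact check the test binary performs.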
	
	
	==> CRI-O <==
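	The entries below are CRI-O's debug log: each pair is a CRI request/response (Version, ImageFsInfo, ListContainers) handled by crio on the node. A hedged sketch of querying the same data by hand, assuming crictl is available inside the guest and pointed at the CRI-O socket (unix:///var/run/crio/crio.sock):

	# hypothetical manual queries mirroring the CRI calls logged below
	minikube -p addons-877061 ssh "sudo crictl version"
	minikube -p addons-877061 ssh "sudo crictl imagefsinfo"
	minikube -p addons-877061 ssh "sudo crictl ps -a"
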
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.517382086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457114517358056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b87e81db-572e-4081-952c-8ddf6aad3045 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.517952734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e27c1c3-e312-4e1b-92ac-b43332d890f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.518015639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e27c1c3-e312-4e1b-92ac-b43332d890f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.518269209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e27c1c3-e312-4e1b-92ac-b43332d890f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.551026246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24b331de-dc62-480c-8b53-8c8bc55c275e name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.551116504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24b331de-dc62-480c-8b53-8c8bc55c275e name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.552400361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ea7e09d-ddba-4892-a09f-129e506d235d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.553769775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457114553739937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ea7e09d-ddba-4892-a09f-129e506d235d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.554275932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7580d93e-e893-41b1-81f8-f36a5a1e0be5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.554341558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7580d93e-e893-41b1-81f8-f36a5a1e0be5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.554578671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7580d93e-e893-41b1-81f8-f36a5a1e0be5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.589036904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=040433e4-b919-4cae-a3b1-52d5fc322c17 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.589117711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=040433e4-b919-4cae-a3b1-52d5fc322c17 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.590151829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=056b6751-0a64-4426-8582-b9450c3aed24 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.591329510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457114591290412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=056b6751-0a64-4426-8582-b9450c3aed24 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.591779226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48d5dc4e-e6e9-4b24-bcfd-e878b87bb4fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.591858422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48d5dc4e-e6e9-4b24-bcfd-e878b87bb4fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.592289865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48d5dc4e-e6e9-4b24-bcfd-e878b87bb4fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.620429897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a09a1cf-4183-4b7c-b086-4aa0bf7fd303 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.620495911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a09a1cf-4183-4b7c-b086-4aa0bf7fd303 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.621929128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb73c902-6df6-491e-ab57-9127c3c0db18 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.623375671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457114623350958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb73c902-6df6-491e-ab57-9127c3c0db18 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.623796449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7053a31-c068-4e25-bca3-d8222d186bbe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.623878337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7053a31-c068-4e25-bca3-d8222d186bbe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:18:34 addons-877061 crio[680]: time="2024-07-31 20:18:34.624129696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09a6e5d911e37129b64f9156885b88d1f74bfc162f1183f59e99b56674c4d9b,PodSandboxId:231072105c2fa6a82019e5beef2c007912e58abad8b5e3b42f72d902386bd825,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722456933762698501,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-fkk6w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fdcbce7-a259-4fcd-aef3-8ab54876051a,},Annotations:map[string]string{io.kubernetes.container.hash: a27159d6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ed79ece6434902e25e0ec74d2983b653c8008b8f4963044b0c61df50efc72e,PodSandboxId:78658f5b203508746498fb38e171d76c1a51ab6587fadbdfe40e19a236040b5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722456793308527427,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb45e46-5ce9-4814-ac2f-70c117f17949,},Annotations:map[string]string{io.kubernet
es.container.hash: e82199f1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7e18596e52895b4b30f06599b3d0223eb4034cc81d6dc9ef78bcd6c08e619b9,PodSandboxId:6dd423c8300e75f8577fcb52591267f9465fa670a59a7b9cb1d9ea4249e0066b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722456721990591463,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3308b08-e08d-41c3-a
546-08165ed612db,},Annotations:map[string]string{io.kubernetes.container.hash: a25f05ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70904c77dc12fd11a5e18da6a1fd199ddabc0e8aa0d260d2073b013f022f84a2,PodSandboxId:901d3cd76334c33e7d7f0f4ba4befd83b2a1aa92238e4916da02913cf2860bb7,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722456680334060399,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-szt4w,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 815a74e0-c39f-4673-8b08-290908785d21,},Annotations:map[string]string{io.kubernetes.container.hash: c3cb6fe0,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b2f64bdf439e9f995de453359ab78d94701db4529219fd02befc0f051f2484,PodSandboxId:16bd8b901e2caddc5136bbe6dd94f19b6307037f75c6636438bdba0a931a2610,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722456678075926870,Labels:map[string]string
{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4xqvn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c5a7cab5-2791-46a3-9285-ab8d99473e07,},Annotations:map[string]string{io.kubernetes.container.hash: 8b36b4bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f,PodSandboxId:ca23fe91f8d40900e69a35db93e27f3766d0e8281f9e1557d8828a77865dc36b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456640921609594,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0edee967-79b7-490d-baf7-7412a25fc2c7,},Annotations:map[string]string{io.kubernetes.container.hash: b4844227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9,PodSandboxId:e4cf51a462481ab57d0e23c5d3b39360f90563142685657a33f08f879a0c4483,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722456639463686470,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pjvjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e01b9e3f-5d75-4f28-bef3-a1160ea25c49,},Annotations:map[string]string{io.kubernetes.container.hash: 3fa3af44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59,PodSandboxId:46e85184ea23c9e00a3b3ec7bf10af2bc7fd092045a7e33e32892dd4247df3c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722456637793743117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h92bj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dac7096-4089-4931-8b7d-506f46fa30aa,},Annotations:map[string]string{io.kubernetes.container.hash: a9475b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d,PodSandboxId:0c9de8a2421446446692ce7f4c0139e3915bd2e7281444dcb0e2152b13c26c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722456616248535140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8545890710ff8e55235cd8b56c9bd130,},Annotations:map[string]string{io.kubernetes.container.hash: 8fe7ee0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127,PodSandboxId:b1926e49ea823fad38d83b3385151f9142adcbf5e12c1b635db6d4db5e541f22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722456616233573652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efdcb07f1ef63f01aa0b2ebd054db4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246,PodSandboxId:f705026c2f1a0966928f4fa4e02c98683ebbc8f1225bf04d58b84f8fe0b8e3eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722456616251188781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f41a488273306fb4b2089e293226dcd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c848a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61,PodSandboxId:a0da43ad405589c9bbbdf18882ed6f963837caed5829dde3def79a0ca130d5ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722456616175720705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-877061,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d9e7287c272d7d787f5206890a8f0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7053a31-c068-4e25-bca3-d8222d186bbe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f09a6e5d911e3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   231072105c2fa       hello-world-app-6778b5fc9f-fkk6w
	27ed79ece6434       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   78658f5b20350       nginx
	f7e18596e5289       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   6dd423c8300e7       busybox
	70904c77dc12f       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   901d3cd76334c       metrics-server-c59844bb4-szt4w
	07b2f64bdf439       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   16bd8b901e2ca       local-path-provisioner-8d985888d-4xqvn
	903f06add0040       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   ca23fe91f8d40       storage-provisioner
	8a5f6a95d494c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   e4cf51a462481       coredns-7db6d8ff4d-pjvjp
	904008fcac960       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   46e85184ea23c       kube-proxy-h92bj
	63b7ef3dfd3ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   f705026c2f1a0       etcd-addons-877061
	dd30ab8ea22e5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   0c9de8a242144       kube-apiserver-addons-877061
	890c7aa8247d6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   b1926e49ea823       kube-controller-manager-addons-877061
	eab1dd8098cb3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   a0da43ad40558       kube-scheduler-addons-877061
	
	
	==> coredns [8a5f6a95d494c41dee38e9eeb00fe59265ab504ea6e0bfb17d1c6958db315be9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52803 - 45331 "HINFO IN 8583045780429383597.6288047007275923205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017922286s
	[INFO] 10.244.0.22:50348 - 51016 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000290184s
	[INFO] 10.244.0.22:55545 - 31557 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108354s
	[INFO] 10.244.0.22:57511 - 14748 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127398s
	[INFO] 10.244.0.22:46889 - 64014 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00005389s
	[INFO] 10.244.0.22:52931 - 61483 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089473s
	[INFO] 10.244.0.22:59676 - 28964 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090484s
	[INFO] 10.244.0.22:41086 - 48020 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.0006069s
	[INFO] 10.244.0.22:37056 - 33957 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000907384s
	[INFO] 10.244.0.27:55542 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000331615s
	[INFO] 10.244.0.27:51183 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150636s
	
	
	==> describe nodes <==
	Name:               addons-877061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-877061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=addons-877061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_10_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-877061
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-877061
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:18:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:15:58 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:15:58 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:15:58 +0000   Wed, 31 Jul 2024 20:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:15:58 +0000   Wed, 31 Jul 2024 20:10:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    addons-877061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 28529e108a6949f1a8866ba1ce22684c
	  System UUID:                28529e10-8a69-49f1-a886-6ba1ce22684c
	  Boot ID:                    dba12f0b-0959-4974-9125-d040b0981d4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  default                     hello-world-app-6778b5fc9f-fkk6w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 coredns-7db6d8ff4d-pjvjp                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m59s
	  kube-system                 etcd-addons-877061                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m13s
	  kube-system                 kube-apiserver-addons-877061              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-addons-877061     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-h92bj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-addons-877061              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 metrics-server-c59844bb4-szt4w            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m55s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  local-path-storage          local-path-provisioner-8d985888d-4xqvn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node addons-877061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node addons-877061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node addons-877061 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node addons-877061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node addons-877061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node addons-877061 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m12s                  kubelet          Node addons-877061 status is now: NodeReady
	  Normal  RegisteredNode           8m                     node-controller  Node addons-877061 event: Registered Node addons-877061 in Controller
	
	
	==> dmesg <==
	[  +5.549922] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.859458] kauditd_printk_skb: 5 callbacks suppressed
	[Jul31 20:11] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.459873] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.453968] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.373969] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.477658] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.431468] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.874397] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.481746] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.014635] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 20:12] kauditd_printk_skb: 13 callbacks suppressed
	[ +25.785959] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.177692] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.982729] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.105439] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.185905] kauditd_printk_skb: 27 callbacks suppressed
	[Jul31 20:13] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.798852] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.116988] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.663041] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.834836] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.790424] kauditd_printk_skb: 19 callbacks suppressed
	[Jul31 20:15] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.542635] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [63b7ef3dfd3ef1c5e1f9edae4029d81dba4b67257179acede3958495a440e246] <==
	{"level":"warn","ts":"2024-07-31T20:11:50.677846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.32471Z","time spent":"353.048051ms","remote":"127.0.0.1:46244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1147 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T20:11:50.678157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.504806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.25\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-31T20:11:50.678245Z","caller":"traceutil/trace.go:171","msg":"trace[108960031] range","detail":"{range_begin:/registry/masterleases/192.168.39.25; range_end:; response_count:1; response_revision:1156; }","duration":"336.614616ms","start":"2024-07-31T20:11:50.341622Z","end":"2024-07-31T20:11:50.678237Z","steps":["trace[108960031] 'agreement among raft nodes before linearized reading'  (duration: 336.438455ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:50.678337Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.341609Z","time spent":"336.719525ms","remote":"127.0.0.1:46018","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.39.25\" "}
	{"level":"warn","ts":"2024-07-31T20:11:50.679082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.066298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-07-31T20:11:50.679276Z","caller":"traceutil/trace.go:171","msg":"trace[1168345062] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077; range_end:; response_count:1; response_revision:1156; }","duration":"326.282216ms","start":"2024-07-31T20:11:50.352984Z","end":"2024-07-31T20:11:50.679266Z","steps":["trace[1168345062] 'agreement among raft nodes before linearized reading'  (duration: 325.753926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:50.679399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:50.352972Z","time spent":"326.345454ms","remote":"127.0.0.1:46050","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-szt4w.17e76539be0ff077\" "}
	{"level":"warn","ts":"2024-07-31T20:11:52.951749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.489434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:11:52.951812Z","caller":"traceutil/trace.go:171","msg":"trace[1233893196] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1162; }","duration":"187.579579ms","start":"2024-07-31T20:11:52.76422Z","end":"2024-07-31T20:11:52.9518Z","steps":["trace[1233893196] 'range keys from in-memory index tree'  (duration: 187.442425ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.952106Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.732675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14357"}
	{"level":"info","ts":"2024-07-31T20:11:52.95296Z","caller":"traceutil/trace.go:171","msg":"trace[196402797] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1162; }","duration":"221.623878ms","start":"2024-07-31T20:11:52.731324Z","end":"2024-07-31T20:11:52.952948Z","steps":["trace[196402797] 'range keys from in-memory index tree'  (duration: 220.627812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.952121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.231494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11447"}
	{"level":"info","ts":"2024-07-31T20:11:52.953172Z","caller":"traceutil/trace.go:171","msg":"trace[1408912019] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1162; }","duration":"306.312141ms","start":"2024-07-31T20:11:52.646851Z","end":"2024-07-31T20:11:52.953163Z","steps":["trace[1408912019] 'range keys from in-memory index tree'  (duration: 305.140525ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:52.953236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:11:52.646809Z","time spent":"306.41817ms","remote":"127.0.0.1:46156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11470,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-31T20:11:56.675581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.998746ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4688406686009807783 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" mod_revision:843 > success:<request_put:<key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" value_size:1034 >> failure:<request_range:<key:\"/registry/endpointslices/gcp-auth/gcp-auth-skpqf\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:11:56.675744Z","caller":"traceutil/trace.go:171","msg":"trace[267995728] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1232; }","duration":"219.182039ms","start":"2024-07-31T20:11:56.456553Z","end":"2024-07-31T20:11:56.675735Z","steps":["trace[267995728] 'read index received'  (duration: 24.95201ms)","trace[267995728] 'applied index is now lower than readState.Index'  (duration: 194.22941ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:11:56.675959Z","caller":"traceutil/trace.go:171","msg":"trace[1281828029] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"241.86725ms","start":"2024-07-31T20:11:56.43408Z","end":"2024-07-31T20:11:56.675947Z","steps":["trace[1281828029] 'process raft request'  (duration: 47.369867ms)","trace[1281828029] 'compare'  (duration: 193.930581ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:11:56.676149Z","caller":"traceutil/trace.go:171","msg":"trace[317476393] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"241.123472ms","start":"2024-07-31T20:11:56.43502Z","end":"2024-07-31T20:11:56.676143Z","steps":["trace[317476393] 'process raft request'  (duration: 240.643621ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:11:56.676289Z","caller":"traceutil/trace.go:171","msg":"trace[740292274] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"240.328144ms","start":"2024-07-31T20:11:56.435955Z","end":"2024-07-31T20:11:56.676283Z","steps":["trace[740292274] 'process raft request'  (duration: 239.753437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:56.67647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.906632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-31T20:11:56.67651Z","caller":"traceutil/trace.go:171","msg":"trace[2035162202] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1200; }","duration":"219.981002ms","start":"2024-07-31T20:11:56.45652Z","end":"2024-07-31T20:11:56.676502Z","steps":["trace[2035162202] 'agreement among raft nodes before linearized reading'  (duration: 219.842754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:11:56.676617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.551871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/yakd-dashboard/\" range_end:\"/registry/secrets/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:11:56.676649Z","caller":"traceutil/trace.go:171","msg":"trace[545984218] range","detail":"{range_begin:/registry/secrets/yakd-dashboard/; range_end:/registry/secrets/yakd-dashboard0; response_count:0; response_revision:1200; }","duration":"204.612811ms","start":"2024-07-31T20:11:56.47203Z","end":"2024-07-31T20:11:56.676643Z","steps":["trace[545984218] 'agreement among raft nodes before linearized reading'  (duration: 204.56753ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:12:53.770984Z","caller":"traceutil/trace.go:171","msg":"trace[1911453504] transaction","detail":"{read_only:false; response_revision:1459; number_of_response:1; }","duration":"210.416478ms","start":"2024-07-31T20:12:53.560546Z","end":"2024-07-31T20:12:53.770963Z","steps":["trace[1911453504] 'process raft request'  (duration: 210.23962ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:12:59.552557Z","caller":"traceutil/trace.go:171","msg":"trace[2010892642] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"164.889858ms","start":"2024-07-31T20:12:59.38765Z","end":"2024-07-31T20:12:59.55254Z","steps":["trace[2010892642] 'process raft request'  (duration: 164.790117ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:18:35 up 8 min,  0 users,  load average: 0.18, 0.50, 0.39
	Linux addons-877061 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dd30ab8ea22e53d3b64936dd3e4a90b0cc1daa34112fba8634df746fd037453d] <==
	W0731 20:12:35.393009       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 20:12:35.393152       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0731 20:12:35.393066       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.249.17:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.249.17:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0731 20:12:35.407501       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0731 20:13:00.767694       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 20:13:09.232151       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0731 20:13:09.435172       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 20:13:09.618321       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.48.131"}
	W0731 20:13:10.274026       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 20:13:15.336383       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.51.25"}
	I0731 20:13:23.073580       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.073882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.098979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.099082       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.192410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.192518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.193029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.193100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 20:13:23.208160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 20:13:23.208202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 20:13:24.204563       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 20:13:24.208750       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 20:13:24.232091       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 20:15:31.341411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.216.116"}
	
	
	==> kube-controller-manager [890c7aa8247d6afc812d9a59063b8f45e559f174205428849643df77460f4127] <==
	W0731 20:16:23.373429       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:16:23.373463       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:16:45.375796       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:16:45.375874       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:16:55.788514       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:16:55.788640       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:16:59.857758       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:16:59.857949       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:17:06.406376       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:17:06.406421       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:17:28.050619       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:17:28.050670       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:17:34.104995       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:17:34.105091       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:17:39.499754       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:17:39.499928       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:18:00.729973       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:18:00.730017       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:18:13.849008       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:18:13.849053       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:18:16.384165       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:18:16.384267       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 20:18:26.473444       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 20:18:26.473488       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 20:18:33.701958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="5.984µs"
	
	
	==> kube-proxy [904008fcac960f56dc51f06e832d238d4ebb7f10ab0e74d7a7d4ba4a606b2e59] <==
	I0731 20:10:39.599531       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:10:39.830507       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.25"]
	I0731 20:10:40.574657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:10:40.574721       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:10:40.574744       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:10:40.582911       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:10:40.583086       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:10:40.583096       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:10:40.595014       1 config.go:192] "Starting service config controller"
	I0731 20:10:40.595050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:10:40.595070       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:10:40.595074       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:10:40.595428       1 config.go:319] "Starting node config controller"
	I0731 20:10:40.595433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:10:40.698923       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:10:40.698952       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:10:40.698976       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eab1dd8098cb3735c55cecf05e5be9d8ec8ab02e1ed455f110175bfd33433e61] <==
	W0731 20:10:19.499378       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:10:19.499446       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:10:19.537506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:10:19.537547       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:10:19.538319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.538377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.548547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:10:19.548609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:10:19.611374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.611418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.637781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:10:19.637888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:10:19.642520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.642586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:10:19.660795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:10:19.660861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:10:19.704979       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:10:19.705026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:10:19.770043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:10:19.770404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:10:19.794661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 20:10:19.794703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 20:10:19.926648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:10:19.926780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0731 20:10:22.085179       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:16:21 addons-877061 kubelet[1280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:16:21 addons-877061 kubelet[1280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:16:21 addons-877061 kubelet[1280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:16:21 addons-877061 kubelet[1280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:16:23 addons-877061 kubelet[1280]: I0731 20:16:23.784353    1280 scope.go:117] "RemoveContainer" containerID="8b487260b826cc5fc5a514d220b98c451b52113ca247a4756423a3beaf171809"
	Jul 31 20:16:23 addons-877061 kubelet[1280]: I0731 20:16:23.803338    1280 scope.go:117] "RemoveContainer" containerID="7272caf407db1d7c296b2ec8e8a82c20ba6ec86e86c131747b2b24e756df5a2b"
	Jul 31 20:17:00 addons-877061 kubelet[1280]: I0731 20:17:00.359245    1280 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 20:17:21 addons-877061 kubelet[1280]: E0731 20:17:21.380164    1280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:17:21 addons-877061 kubelet[1280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:17:21 addons-877061 kubelet[1280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:17:21 addons-877061 kubelet[1280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:17:21 addons-877061 kubelet[1280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:18:14 addons-877061 kubelet[1280]: I0731 20:18:14.360181    1280 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 20:18:21 addons-877061 kubelet[1280]: E0731 20:18:21.380178    1280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:18:21 addons-877061 kubelet[1280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:18:21 addons-877061 kubelet[1280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:18:21 addons-877061 kubelet[1280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:18:21 addons-877061 kubelet[1280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:18:33 addons-877061 kubelet[1280]: I0731 20:18:33.718183    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-fkk6w" podStartSLOduration=180.688340528 podStartE2EDuration="3m2.718144551s" podCreationTimestamp="2024-07-31 20:15:31 +0000 UTC" firstStartedPulling="2024-07-31 20:15:31.714998013 +0000 UTC m=+310.464898274" lastFinishedPulling="2024-07-31 20:15:33.744802035 +0000 UTC m=+312.494702297" observedRunningTime="2024-07-31 20:15:34.137681955 +0000 UTC m=+312.887582247" watchObservedRunningTime="2024-07-31 20:18:33.718144551 +0000 UTC m=+492.468044825"
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.028905    1280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxzj\" (UniqueName: \"kubernetes.io/projected/815a74e0-c39f-4673-8b08-290908785d21-kube-api-access-tnxzj\") pod \"815a74e0-c39f-4673-8b08-290908785d21\" (UID: \"815a74e0-c39f-4673-8b08-290908785d21\") "
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.028955    1280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/815a74e0-c39f-4673-8b08-290908785d21-tmp-dir\") pod \"815a74e0-c39f-4673-8b08-290908785d21\" (UID: \"815a74e0-c39f-4673-8b08-290908785d21\") "
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.029268    1280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815a74e0-c39f-4673-8b08-290908785d21-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "815a74e0-c39f-4673-8b08-290908785d21" (UID: "815a74e0-c39f-4673-8b08-290908785d21"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.032001    1280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815a74e0-c39f-4673-8b08-290908785d21-kube-api-access-tnxzj" (OuterVolumeSpecName: "kube-api-access-tnxzj") pod "815a74e0-c39f-4673-8b08-290908785d21" (UID: "815a74e0-c39f-4673-8b08-290908785d21"). InnerVolumeSpecName "kube-api-access-tnxzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.129479    1280 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/815a74e0-c39f-4673-8b08-290908785d21-tmp-dir\") on node \"addons-877061\" DevicePath \"\""
	Jul 31 20:18:35 addons-877061 kubelet[1280]: I0731 20:18:35.129506    1280 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tnxzj\" (UniqueName: \"kubernetes.io/projected/815a74e0-c39f-4673-8b08-290908785d21-kube-api-access-tnxzj\") on node \"addons-877061\" DevicePath \"\""
	
	
	==> storage-provisioner [903f06add004099c4fe2dff0db7bfcd9370e9816404818731003558509f6cc6f] <==
	I0731 20:10:41.971307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:10:41.989264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:10:41.989313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 20:10:41.999219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 20:10:41.999399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df!
	I0731 20:10:42.001061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0892d660-346a-46f4-9d67-b3b13db61f13", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df became leader
	I0731 20:10:42.100407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-877061_f5473ff4-88c1-48cb-8677-b79126ba55df!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-877061 -n addons-877061
helpers_test.go:261: (dbg) Run:  kubectl --context addons-877061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-szt4w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-877061 describe pod metrics-server-c59844bb4-szt4w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-877061 describe pod metrics-server-c59844bb4-szt4w: exit status 1 (73.449131ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-szt4w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-877061 describe pod metrics-server-c59844bb4-szt4w: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (356.45s)
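The apiserver log above shows v1beta1.metrics.k8s.io repeatedly failing its availability check (503 / client timeout) before the replica set is scaled down, and the post-mortem then finds the metrics-server pod already gone. A minimal manual triage sketch, assuming the standard minikube metrics-server addon objects in kube-system (the k8s-app=metrics-server label and the metrics-server deployment name are assumptions, not taken from this report):

	kubectl --context addons-877061 get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl --context addons-877061 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context addons-877061 -n kube-system logs deploy/metrics-server --tail=50

If the APIService shows Available=False while the pod is Running, the 503s above point at the aggregated API endpoint rather than at the metrics-server pod itself.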

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-877061
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-877061: exit status 82 (2m0.47095373s)

                                                
                                                
-- stdout --
	* Stopping node "addons-877061"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-877061" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877061
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-877061: exit status 11 (21.611123411s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.25:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-877061" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877061
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-877061: exit status 11 (6.143748275s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.25:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-877061" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-877061
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-877061: exit status 11 (6.144024491s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.25:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-877061" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)
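Here the stop timed out (GUEST_STOP_TIMEOUT) and the follow-up addon commands then failed with "no route to host", i.e. the VM was left in a state that libvirt still reports as "Running" but that is no longer reachable over SSH at 192.168.39.25:22. A hedged sketch of how this could be confirmed on the KVM host, assuming the default qemu:///system connection and that the libvirt domain is named after the profile (both assumptions, not shown in this report):

	out/minikube-linux-amd64 status -p addons-877061
	virsh --connect qemu:///system list --all        # is the addons-877061 domain still listed as running?
	out/minikube-linux-amd64 logs -p addons-877061 --file=logs.txt   # the step suggested in the stderr above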

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 node stop m02 -v=7 --alsologtostderr
E0731 20:30:12.318311 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:30:53.279161 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:32:00.018522 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.460180456s)

                                                
                                                
-- stdout --
	* Stopping node "ha-430887-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:29:59.924798 1115958 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:29:59.925058 1115958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:29:59.925068 1115958 out.go:304] Setting ErrFile to fd 2...
	I0731 20:29:59.925073 1115958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:29:59.925249 1115958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:29:59.925479 1115958 mustload.go:65] Loading cluster: ha-430887
	I0731 20:29:59.925854 1115958 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:29:59.925875 1115958 stop.go:39] StopHost: ha-430887-m02
	I0731 20:29:59.926251 1115958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:29:59.926299 1115958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:29:59.942678 1115958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0731 20:29:59.943176 1115958 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:29:59.943836 1115958 main.go:141] libmachine: Using API Version  1
	I0731 20:29:59.943856 1115958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:29:59.944279 1115958 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:29:59.946466 1115958 out.go:177] * Stopping node "ha-430887-m02"  ...
	I0731 20:29:59.947797 1115958 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:29:59.947834 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:29:59.948072 1115958 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:29:59.948110 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:29:59.950948 1115958 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:29:59.951369 1115958 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:29:59.951407 1115958 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:29:59.951532 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:29:59.951696 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:29:59.951856 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:29:59.951990 1115958 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:30:00.039336 1115958 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:30:00.091582 1115958 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:30:00.144419 1115958 main.go:141] libmachine: Stopping "ha-430887-m02"...
	I0731 20:30:00.144455 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:30:00.146245 1115958 main.go:141] libmachine: (ha-430887-m02) Calling .Stop
	I0731 20:30:00.149879 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 0/120
	I0731 20:30:01.151274 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 1/120
	I0731 20:30:02.153248 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 2/120
	I0731 20:30:03.155086 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 3/120
	I0731 20:30:04.156460 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 4/120
	I0731 20:30:05.157741 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 5/120
	I0731 20:30:06.159211 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 6/120
	I0731 20:30:07.160581 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 7/120
	I0731 20:30:08.162497 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 8/120
	I0731 20:30:09.163704 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 9/120
	I0731 20:30:10.165864 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 10/120
	I0731 20:30:11.167557 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 11/120
	I0731 20:30:12.169879 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 12/120
	I0731 20:30:13.171877 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 13/120
	I0731 20:30:14.173725 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 14/120
	I0731 20:30:15.175438 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 15/120
	I0731 20:30:16.176876 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 16/120
	I0731 20:30:17.178695 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 17/120
	I0731 20:30:18.180001 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 18/120
	I0731 20:30:19.181263 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 19/120
	I0731 20:30:20.183265 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 20/120
	I0731 20:30:21.184719 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 21/120
	I0731 20:30:22.186929 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 22/120
	I0731 20:30:23.189020 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 23/120
	I0731 20:30:24.190645 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 24/120
	I0731 20:30:25.192620 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 25/120
	I0731 20:30:26.194728 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 26/120
	I0731 20:30:27.196054 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 27/120
	I0731 20:30:28.197391 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 28/120
	I0731 20:30:29.198703 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 29/120
	I0731 20:30:30.200648 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 30/120
	I0731 20:30:31.201921 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 31/120
	I0731 20:30:32.203214 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 32/120
	I0731 20:30:33.204637 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 33/120
	I0731 20:30:34.206530 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 34/120
	I0731 20:30:35.207939 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 35/120
	I0731 20:30:36.209387 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 36/120
	I0731 20:30:37.211574 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 37/120
	I0731 20:30:38.212849 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 38/120
	I0731 20:30:39.214217 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 39/120
	I0731 20:30:40.216444 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 40/120
	I0731 20:30:41.218606 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 41/120
	I0731 20:30:42.219959 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 42/120
	I0731 20:30:43.221292 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 43/120
	I0731 20:30:44.223081 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 44/120
	I0731 20:30:45.225733 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 45/120
	I0731 20:30:46.227111 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 46/120
	I0731 20:30:47.229326 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 47/120
	I0731 20:30:48.230586 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 48/120
	I0731 20:30:49.231925 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 49/120
	I0731 20:30:50.233998 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 50/120
	I0731 20:30:51.235306 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 51/120
	I0731 20:30:52.236545 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 52/120
	I0731 20:30:53.237762 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 53/120
	I0731 20:30:54.239347 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 54/120
	I0731 20:30:55.240950 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 55/120
	I0731 20:30:56.242159 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 56/120
	I0731 20:30:57.243631 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 57/120
	I0731 20:30:58.244823 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 58/120
	I0731 20:30:59.246501 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 59/120
	I0731 20:31:00.247850 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 60/120
	I0731 20:31:01.249189 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 61/120
	I0731 20:31:02.250605 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 62/120
	I0731 20:31:03.251870 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 63/120
	I0731 20:31:04.253206 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 64/120
	I0731 20:31:05.255077 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 65/120
	I0731 20:31:06.256171 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 66/120
	I0731 20:31:07.258045 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 67/120
	I0731 20:31:08.259197 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 68/120
	I0731 20:31:09.260543 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 69/120
	I0731 20:31:10.262629 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 70/120
	I0731 20:31:11.263905 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 71/120
	I0731 20:31:12.265560 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 72/120
	I0731 20:31:13.266715 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 73/120
	I0731 20:31:14.268001 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 74/120
	I0731 20:31:15.269707 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 75/120
	I0731 20:31:16.271016 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 76/120
	I0731 20:31:17.272721 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 77/120
	I0731 20:31:18.274084 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 78/120
	I0731 20:31:19.275644 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 79/120
	I0731 20:31:20.277804 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 80/120
	I0731 20:31:21.279233 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 81/120
	I0731 20:31:22.280825 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 82/120
	I0731 20:31:23.282388 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 83/120
	I0731 20:31:24.283659 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 84/120
	I0731 20:31:25.285822 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 85/120
	I0731 20:31:26.287487 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 86/120
	I0731 20:31:27.288716 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 87/120
	I0731 20:31:28.290576 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 88/120
	I0731 20:31:29.291800 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 89/120
	I0731 20:31:30.293256 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 90/120
	I0731 20:31:31.294746 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 91/120
	I0731 20:31:32.296122 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 92/120
	I0731 20:31:33.297444 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 93/120
	I0731 20:31:34.298759 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 94/120
	I0731 20:31:35.300654 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 95/120
	I0731 20:31:36.302017 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 96/120
	I0731 20:31:37.303993 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 97/120
	I0731 20:31:38.305208 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 98/120
	I0731 20:31:39.307257 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 99/120
	I0731 20:31:40.309594 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 100/120
	I0731 20:31:41.311703 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 101/120
	I0731 20:31:42.313316 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 102/120
	I0731 20:31:43.314671 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 103/120
	I0731 20:31:44.316145 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 104/120
	I0731 20:31:45.317990 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 105/120
	I0731 20:31:46.319363 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 106/120
	I0731 20:31:47.320836 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 107/120
	I0731 20:31:48.322863 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 108/120
	I0731 20:31:49.324839 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 109/120
	I0731 20:31:50.327019 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 110/120
	I0731 20:31:51.328379 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 111/120
	I0731 20:31:52.330793 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 112/120
	I0731 20:31:53.331994 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 113/120
	I0731 20:31:54.333337 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 114/120
	I0731 20:31:55.335202 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 115/120
	I0731 20:31:56.336732 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 116/120
	I0731 20:31:57.337949 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 117/120
	I0731 20:31:58.339228 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 118/120
	I0731 20:31:59.340585 1115958 main.go:141] libmachine: (ha-430887-m02) Waiting for machine to stop 119/120
	I0731 20:32:00.341808 1115958 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:32:00.342000 1115958 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-430887 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
E0731 20:32:15.200248 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (19.144320475s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:32:00.388815 1116406 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:00.388925 1116406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:00.388933 1116406 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:00.388937 1116406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:00.389087 1116406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:00.389250 1116406 out.go:298] Setting JSON to false
	I0731 20:32:00.389279 1116406 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:00.389380 1116406 notify.go:220] Checking for updates...
	I0731 20:32:00.389703 1116406 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:00.389725 1116406 status.go:255] checking status of ha-430887 ...
	I0731 20:32:00.390225 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.390310 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.408655 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I0731 20:32:00.409110 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.409829 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.409889 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.410332 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.410576 1116406 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:00.412200 1116406 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:00.412229 1116406 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:00.412526 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.412599 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.427119 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I0731 20:32:00.427515 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.428136 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.428163 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.428493 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.428713 1116406 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:00.431181 1116406 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:00.431652 1116406 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:00.431687 1116406 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:00.431819 1116406 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:00.432150 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.432209 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.448463 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0731 20:32:00.448854 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.449402 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.449431 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.449762 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.449989 1116406 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:00.450208 1116406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:00.450244 1116406 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:00.453439 1116406 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:00.453885 1116406 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:00.453907 1116406 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:00.454070 1116406 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:00.454234 1116406 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:00.454376 1116406 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:00.454492 1116406 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:00.536644 1116406 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:00.544535 1116406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:00.563358 1116406 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:00.563393 1116406 api_server.go:166] Checking apiserver status ...
	I0731 20:32:00.563426 1116406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:00.577807 1116406 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:00.588343 1116406 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:00.588397 1116406 ssh_runner.go:195] Run: ls
	I0731 20:32:00.593107 1116406 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:00.600016 1116406 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:00.600044 1116406 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:00.600055 1116406 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:00.600083 1116406 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:00.600424 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.600449 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.615621 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I0731 20:32:00.616105 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.616593 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.616616 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.616955 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.617207 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:00.618660 1116406 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:00.618678 1116406 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:00.618987 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.619017 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.633775 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I0731 20:32:00.634270 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.634779 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.634802 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.635169 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.635349 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:00.638188 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:00.638554 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:00.638588 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:00.638702 1116406 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:00.639002 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:00.639035 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:00.653743 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0731 20:32:00.654135 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:00.654611 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:00.654633 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:00.654962 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:00.655163 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:00.655360 1116406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:00.655383 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:00.658227 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:00.658657 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:00.658681 1116406 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:00.658827 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:00.659008 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:00.659151 1116406 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:00.659317 1116406 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:19.136359 1116406 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:19.136493 1116406 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:19.136514 1116406 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:19.136524 1116406 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:19.136546 1116406 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:19.136554 1116406 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:19.136873 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.136922 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.152270 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0731 20:32:19.152731 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.153191 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.153213 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.153509 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.153711 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:19.155207 1116406 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:19.155231 1116406 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:19.155527 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.155559 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.169894 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0731 20:32:19.170318 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.170761 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.170785 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.171090 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.171265 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:19.174073 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:19.174432 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:19.174458 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:19.174580 1116406 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:19.174886 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.174926 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.189305 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I0731 20:32:19.189753 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.190259 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.190279 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.190571 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.190844 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:19.191035 1116406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:19.191065 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:19.193587 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:19.194001 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:19.194024 1116406 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:19.194153 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:19.194329 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:19.194491 1116406 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:19.194685 1116406 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:19.272300 1116406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:19.291723 1116406 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:19.291755 1116406 api_server.go:166] Checking apiserver status ...
	I0731 20:32:19.291802 1116406 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:19.309128 1116406 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:19.320212 1116406 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:19.320261 1116406 ssh_runner.go:195] Run: ls
	I0731 20:32:19.326915 1116406 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:19.331489 1116406 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:19.331517 1116406 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:19.331530 1116406 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:19.331556 1116406 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:19.331946 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.331985 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.349467 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0731 20:32:19.349968 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.350530 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.350553 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.350936 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.351146 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:19.352628 1116406 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:19.352646 1116406 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:19.352971 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.353033 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.367838 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0731 20:32:19.368333 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.368868 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.368891 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.369163 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.369361 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:19.372024 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:19.372495 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:19.372544 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:19.372639 1116406 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:19.372937 1116406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:19.372971 1116406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:19.387217 1116406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I0731 20:32:19.387719 1116406 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:19.388349 1116406 main.go:141] libmachine: Using API Version  1
	I0731 20:32:19.388370 1116406 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:19.388747 1116406 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:19.388949 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:19.389143 1116406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:19.389165 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:19.391944 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:19.392408 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:19.392441 1116406 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:19.392555 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:19.392719 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:19.392871 1116406 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:19.393030 1116406 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:19.471109 1116406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:19.485297 1116406 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr" : exit status 3
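Note on the failure above: the captured stderr traces the per-node probe that "minikube status" performs for each control-plane node -- open an SSH session, check kubelet, locate a kube-apiserver process with pgrep, attempt the (cgroup v1 only) freezer lookup, and finally query the cluster VIP's /healthz endpoint. The run exits with status 3 only because node ha-430887-m02 has been stopped, so its SSH dial fails with "no route to host" and the node is reported as Host:Error. As a rough, hedged illustration only, the Go sketch below approximates the last two probe steps; the endpoint URL, running pgrep locally instead of over SSH, and skipping TLS verification are assumptions for brevity, not minikube's actual implementation.

// Hypothetical sketch (not minikube's code): approximate the apiserver probe the
// status stderr above shows -- look for a kube-apiserver process, then confirm the
// load-balanced endpoint answers /healthz with 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	// 1. Is a kube-apiserver process running? (minikube runs this over SSH on the
	//    node; here it is run locally purely for illustration.)
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return false, fmt.Errorf("no kube-apiserver process found: %v", err)
	}

	// 2. Does the endpoint answer /healthz with 200 and body "ok"?
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: self-signed certs
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443") // VIP taken from the log above
	fmt.Println("apiserver healthy:", ok, "err:", err)
}

A probe like this succeeds for ha-430887 and ha-430887-m03 in the log, while the SSH step (not shown in the sketch) is what fails for the stopped m02.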
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430887 -n ha-430887
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-430887 logs -n 25: (1.256518688s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m03_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m04 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp testdata/cp-test.txt                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m04_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03:/home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m03 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-430887 node stop m02 -v=7                                                     | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:25:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:25:18.910914 1111910 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:25:18.911204 1111910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:25:18.911214 1111910 out.go:304] Setting ErrFile to fd 2...
	I0731 20:25:18.911219 1111910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:25:18.911425 1111910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:25:18.912044 1111910 out.go:298] Setting JSON to false
	I0731 20:25:18.913045 1111910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14870,"bootTime":1722442649,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:25:18.913112 1111910 start.go:139] virtualization: kvm guest
	I0731 20:25:18.915390 1111910 out.go:177] * [ha-430887] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:25:18.916792 1111910 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:25:18.916791 1111910 notify.go:220] Checking for updates...
	I0731 20:25:18.919661 1111910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:25:18.921153 1111910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:25:18.922508 1111910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:18.923770 1111910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:25:18.925289 1111910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:25:18.926887 1111910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:25:18.962913 1111910 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:25:18.964226 1111910 start.go:297] selected driver: kvm2
	I0731 20:25:18.964238 1111910 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:25:18.964249 1111910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:25:18.965062 1111910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:25:18.965145 1111910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:25:18.980874 1111910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:25:18.980962 1111910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:25:18.981255 1111910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:25:18.981311 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:18.981329 1111910 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 20:25:18.981339 1111910 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 20:25:18.981451 1111910 start.go:340] cluster config:
	{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0731 20:25:18.981584 1111910 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:25:18.984220 1111910 out.go:177] * Starting "ha-430887" primary control-plane node in "ha-430887" cluster
	I0731 20:25:18.985418 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:25:18.985463 1111910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:25:18.985477 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:25:18.985588 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:25:18.985601 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:25:18.986022 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:25:18.986056 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json: {Name:mk4dcae038756b36a484940a0ad4406989974a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:18.986231 1111910 start.go:360] acquireMachinesLock for ha-430887: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:25:18.986278 1111910 start.go:364] duration metric: took 27.698µs to acquireMachinesLock for "ha-430887"
	I0731 20:25:18.986302 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:25:18.986392 1111910 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:25:18.988702 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:25:18.988867 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:25:18.988911 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:25:19.004001 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0731 20:25:19.004605 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:25:19.005159 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:25:19.005178 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:25:19.005626 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:25:19.005789 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:19.005966 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:19.006133 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:25:19.006177 1111910 client.go:168] LocalClient.Create starting
	I0731 20:25:19.006217 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:25:19.006251 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:25:19.006269 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:25:19.006325 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:25:19.006352 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:25:19.006371 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:25:19.006392 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:25:19.006404 1111910 main.go:141] libmachine: (ha-430887) Calling .PreCreateCheck
	I0731 20:25:19.006715 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:19.007118 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:25:19.007136 1111910 main.go:141] libmachine: (ha-430887) Calling .Create
	I0731 20:25:19.007246 1111910 main.go:141] libmachine: (ha-430887) Creating KVM machine...
	I0731 20:25:19.008638 1111910 main.go:141] libmachine: (ha-430887) DBG | found existing default KVM network
	I0731 20:25:19.009392 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.009254 1111933 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d350}
	I0731 20:25:19.009416 1111910 main.go:141] libmachine: (ha-430887) DBG | created network xml: 
	I0731 20:25:19.009429 1111910 main.go:141] libmachine: (ha-430887) DBG | <network>
	I0731 20:25:19.009436 1111910 main.go:141] libmachine: (ha-430887) DBG |   <name>mk-ha-430887</name>
	I0731 20:25:19.009447 1111910 main.go:141] libmachine: (ha-430887) DBG |   <dns enable='no'/>
	I0731 20:25:19.009456 1111910 main.go:141] libmachine: (ha-430887) DBG |   
	I0731 20:25:19.009467 1111910 main.go:141] libmachine: (ha-430887) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 20:25:19.009478 1111910 main.go:141] libmachine: (ha-430887) DBG |     <dhcp>
	I0731 20:25:19.009503 1111910 main.go:141] libmachine: (ha-430887) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 20:25:19.009538 1111910 main.go:141] libmachine: (ha-430887) DBG |     </dhcp>
	I0731 20:25:19.009553 1111910 main.go:141] libmachine: (ha-430887) DBG |   </ip>
	I0731 20:25:19.009560 1111910 main.go:141] libmachine: (ha-430887) DBG |   
	I0731 20:25:19.009570 1111910 main.go:141] libmachine: (ha-430887) DBG | </network>
	I0731 20:25:19.009578 1111910 main.go:141] libmachine: (ha-430887) DBG | 
	I0731 20:25:19.014449 1111910 main.go:141] libmachine: (ha-430887) DBG | trying to create private KVM network mk-ha-430887 192.168.39.0/24...
	I0731 20:25:19.080321 1111910 main.go:141] libmachine: (ha-430887) DBG | private KVM network mk-ha-430887 192.168.39.0/24 created
	I0731 20:25:19.080363 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.080257 1111933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:19.080379 1111910 main.go:141] libmachine: (ha-430887) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 ...
	I0731 20:25:19.080397 1111910 main.go:141] libmachine: (ha-430887) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:25:19.080416 1111910 main.go:141] libmachine: (ha-430887) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:25:19.367276 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.367138 1111933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa...
	I0731 20:25:19.586177 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.586061 1111933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/ha-430887.rawdisk...
	I0731 20:25:19.586206 1111910 main.go:141] libmachine: (ha-430887) DBG | Writing magic tar header
	I0731 20:25:19.586221 1111910 main.go:141] libmachine: (ha-430887) DBG | Writing SSH key tar header
	I0731 20:25:19.586239 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.586206 1111933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 ...
	I0731 20:25:19.586389 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887
	I0731 20:25:19.586416 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:25:19.586428 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 (perms=drwx------)
	I0731 20:25:19.586439 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:25:19.586449 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:25:19.586461 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:25:19.586470 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:25:19.586483 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:25:19.586491 1111910 main.go:141] libmachine: (ha-430887) Creating domain...
	I0731 20:25:19.586502 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:19.586518 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:25:19.586545 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:25:19.586559 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:25:19.586567 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home
	I0731 20:25:19.586602 1111910 main.go:141] libmachine: (ha-430887) DBG | Skipping /home - not owner
	I0731 20:25:19.587616 1111910 main.go:141] libmachine: (ha-430887) define libvirt domain using xml: 
	I0731 20:25:19.587642 1111910 main.go:141] libmachine: (ha-430887) <domain type='kvm'>
	I0731 20:25:19.587650 1111910 main.go:141] libmachine: (ha-430887)   <name>ha-430887</name>
	I0731 20:25:19.587658 1111910 main.go:141] libmachine: (ha-430887)   <memory unit='MiB'>2200</memory>
	I0731 20:25:19.587701 1111910 main.go:141] libmachine: (ha-430887)   <vcpu>2</vcpu>
	I0731 20:25:19.587721 1111910 main.go:141] libmachine: (ha-430887)   <features>
	I0731 20:25:19.587735 1111910 main.go:141] libmachine: (ha-430887)     <acpi/>
	I0731 20:25:19.587744 1111910 main.go:141] libmachine: (ha-430887)     <apic/>
	I0731 20:25:19.587752 1111910 main.go:141] libmachine: (ha-430887)     <pae/>
	I0731 20:25:19.587764 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.587773 1111910 main.go:141] libmachine: (ha-430887)   </features>
	I0731 20:25:19.587783 1111910 main.go:141] libmachine: (ha-430887)   <cpu mode='host-passthrough'>
	I0731 20:25:19.587791 1111910 main.go:141] libmachine: (ha-430887)   
	I0731 20:25:19.587799 1111910 main.go:141] libmachine: (ha-430887)   </cpu>
	I0731 20:25:19.587818 1111910 main.go:141] libmachine: (ha-430887)   <os>
	I0731 20:25:19.587836 1111910 main.go:141] libmachine: (ha-430887)     <type>hvm</type>
	I0731 20:25:19.587846 1111910 main.go:141] libmachine: (ha-430887)     <boot dev='cdrom'/>
	I0731 20:25:19.587856 1111910 main.go:141] libmachine: (ha-430887)     <boot dev='hd'/>
	I0731 20:25:19.587867 1111910 main.go:141] libmachine: (ha-430887)     <bootmenu enable='no'/>
	I0731 20:25:19.587886 1111910 main.go:141] libmachine: (ha-430887)   </os>
	I0731 20:25:19.587896 1111910 main.go:141] libmachine: (ha-430887)   <devices>
	I0731 20:25:19.587908 1111910 main.go:141] libmachine: (ha-430887)     <disk type='file' device='cdrom'>
	I0731 20:25:19.587923 1111910 main.go:141] libmachine: (ha-430887)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/boot2docker.iso'/>
	I0731 20:25:19.587936 1111910 main.go:141] libmachine: (ha-430887)       <target dev='hdc' bus='scsi'/>
	I0731 20:25:19.587946 1111910 main.go:141] libmachine: (ha-430887)       <readonly/>
	I0731 20:25:19.587954 1111910 main.go:141] libmachine: (ha-430887)     </disk>
	I0731 20:25:19.587964 1111910 main.go:141] libmachine: (ha-430887)     <disk type='file' device='disk'>
	I0731 20:25:19.587971 1111910 main.go:141] libmachine: (ha-430887)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:25:19.587981 1111910 main.go:141] libmachine: (ha-430887)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/ha-430887.rawdisk'/>
	I0731 20:25:19.587987 1111910 main.go:141] libmachine: (ha-430887)       <target dev='hda' bus='virtio'/>
	I0731 20:25:19.587997 1111910 main.go:141] libmachine: (ha-430887)     </disk>
	I0731 20:25:19.588010 1111910 main.go:141] libmachine: (ha-430887)     <interface type='network'>
	I0731 20:25:19.588026 1111910 main.go:141] libmachine: (ha-430887)       <source network='mk-ha-430887'/>
	I0731 20:25:19.588038 1111910 main.go:141] libmachine: (ha-430887)       <model type='virtio'/>
	I0731 20:25:19.588049 1111910 main.go:141] libmachine: (ha-430887)     </interface>
	I0731 20:25:19.588058 1111910 main.go:141] libmachine: (ha-430887)     <interface type='network'>
	I0731 20:25:19.588069 1111910 main.go:141] libmachine: (ha-430887)       <source network='default'/>
	I0731 20:25:19.588081 1111910 main.go:141] libmachine: (ha-430887)       <model type='virtio'/>
	I0731 20:25:19.588102 1111910 main.go:141] libmachine: (ha-430887)     </interface>
	I0731 20:25:19.588114 1111910 main.go:141] libmachine: (ha-430887)     <serial type='pty'>
	I0731 20:25:19.588129 1111910 main.go:141] libmachine: (ha-430887)       <target port='0'/>
	I0731 20:25:19.588143 1111910 main.go:141] libmachine: (ha-430887)     </serial>
	I0731 20:25:19.588155 1111910 main.go:141] libmachine: (ha-430887)     <console type='pty'>
	I0731 20:25:19.588168 1111910 main.go:141] libmachine: (ha-430887)       <target type='serial' port='0'/>
	I0731 20:25:19.588179 1111910 main.go:141] libmachine: (ha-430887)     </console>
	I0731 20:25:19.588190 1111910 main.go:141] libmachine: (ha-430887)     <rng model='virtio'>
	I0731 20:25:19.588203 1111910 main.go:141] libmachine: (ha-430887)       <backend model='random'>/dev/random</backend>
	I0731 20:25:19.588216 1111910 main.go:141] libmachine: (ha-430887)     </rng>
	I0731 20:25:19.588227 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.588237 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.588244 1111910 main.go:141] libmachine: (ha-430887)   </devices>
	I0731 20:25:19.588253 1111910 main.go:141] libmachine: (ha-430887) </domain>
	I0731 20:25:19.588263 1111910 main.go:141] libmachine: (ha-430887) 
	I0731 20:25:19.592459 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:4e:c7:83 in network default
	I0731 20:25:19.593045 1111910 main.go:141] libmachine: (ha-430887) Ensuring networks are active...
	I0731 20:25:19.593060 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:19.593738 1111910 main.go:141] libmachine: (ha-430887) Ensuring network default is active
	I0731 20:25:19.594076 1111910 main.go:141] libmachine: (ha-430887) Ensuring network mk-ha-430887 is active
	I0731 20:25:19.594565 1111910 main.go:141] libmachine: (ha-430887) Getting domain xml...
	I0731 20:25:19.595346 1111910 main.go:141] libmachine: (ha-430887) Creating domain...
	I0731 20:25:20.785997 1111910 main.go:141] libmachine: (ha-430887) Waiting to get IP...
	I0731 20:25:20.786882 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:20.787271 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:20.787318 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:20.787268 1111933 retry.go:31] will retry after 288.448441ms: waiting for machine to come up
	I0731 20:25:21.077798 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.078186 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.078228 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.078137 1111933 retry.go:31] will retry after 252.829338ms: waiting for machine to come up
	I0731 20:25:21.332877 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.333430 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.333451 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.333379 1111933 retry.go:31] will retry after 334.800359ms: waiting for machine to come up
	I0731 20:25:21.669873 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.670216 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.670241 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.670168 1111933 retry.go:31] will retry after 472.221199ms: waiting for machine to come up
	I0731 20:25:22.143436 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:22.143930 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:22.143959 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:22.143872 1111933 retry.go:31] will retry after 559.007443ms: waiting for machine to come up
	I0731 20:25:22.704692 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:22.705099 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:22.705130 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:22.705032 1111933 retry.go:31] will retry after 897.504113ms: waiting for machine to come up
	I0731 20:25:23.604024 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:23.604389 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:23.604420 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:23.604347 1111933 retry.go:31] will retry after 1.120126909s: waiting for machine to come up
	I0731 20:25:24.726083 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:24.726625 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:24.726654 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:24.726570 1111933 retry.go:31] will retry after 1.143168622s: waiting for machine to come up
	I0731 20:25:25.870828 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:25.871310 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:25.871342 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:25.871253 1111933 retry.go:31] will retry after 1.606766772s: waiting for machine to come up
	I0731 20:25:27.480277 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:27.480740 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:27.480775 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:27.480678 1111933 retry.go:31] will retry after 1.912815338s: waiting for machine to come up
	I0731 20:25:29.394806 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:29.395236 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:29.395265 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:29.395172 1111933 retry.go:31] will retry after 2.201647109s: waiting for machine to come up
	I0731 20:25:31.599462 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:31.599906 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:31.599936 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:31.599856 1111933 retry.go:31] will retry after 3.569826584s: waiting for machine to come up
	I0731 20:25:35.170903 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:35.171313 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:35.171339 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:35.171261 1111933 retry.go:31] will retry after 3.217563206s: waiting for machine to come up
	I0731 20:25:38.392646 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.393130 1111910 main.go:141] libmachine: (ha-430887) Found IP for machine: 192.168.39.195
	I0731 20:25:38.393159 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has current primary IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
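
The run of "will retry after …" messages above reflects a simple backoff loop: each attempt asks libvirt for the domain's DHCP lease and, on failure, sleeps a progressively longer, jittered interval. A minimal Go sketch of that pattern (lookupIP, the timings and the growth factor are illustrative assumptions, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor for the machine's
// current IP; it fails until a DHCP lease exists (simulated here).
func lookupIP(attempt int) (string, error) {
	if attempt < 10 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.195", nil
}

// waitForIP retries lookupIP with a growing, jittered delay, much like the
// retry.go messages in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the interval, roughly as the log shows
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
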
	I0731 20:25:38.393165 1111910 main.go:141] libmachine: (ha-430887) Reserving static IP address...
	I0731 20:25:38.393561 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find host DHCP lease matching {name: "ha-430887", mac: "52:54:00:10:dc:43", ip: "192.168.39.195"} in network mk-ha-430887
	I0731 20:25:38.468809 1111910 main.go:141] libmachine: (ha-430887) DBG | Getting to WaitForSSH function...
	I0731 20:25:38.468844 1111910 main.go:141] libmachine: (ha-430887) Reserved static IP address: 192.168.39.195
	I0731 20:25:38.468857 1111910 main.go:141] libmachine: (ha-430887) Waiting for SSH to be available...
	I0731 20:25:38.471357 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.471785 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.471816 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.471998 1111910 main.go:141] libmachine: (ha-430887) DBG | Using SSH client type: external
	I0731 20:25:38.472027 1111910 main.go:141] libmachine: (ha-430887) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa (-rw-------)
	I0731 20:25:38.472062 1111910 main.go:141] libmachine: (ha-430887) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:25:38.472079 1111910 main.go:141] libmachine: (ha-430887) DBG | About to run SSH command:
	I0731 20:25:38.472107 1111910 main.go:141] libmachine: (ha-430887) DBG | exit 0
	I0731 20:25:38.595719 1111910 main.go:141] libmachine: (ha-430887) DBG | SSH cmd err, output: <nil>: 
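
WaitForSSH above amounts to running `exit 0` over SSH until the command succeeds. A rough Go equivalent that shells out to the system ssh client with similar options (host, key path and the retry count are placeholders, not values from minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady shells out to the system ssh client and runs "exit 0";
// a nil error means the SSH server accepted the connection.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.195"                    // IP from the log above
	key := "/path/to/machines/ha-430887/id_rsa" // placeholder key path
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
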
	I0731 20:25:38.595942 1111910 main.go:141] libmachine: (ha-430887) KVM machine creation complete!
	I0731 20:25:38.596288 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:38.596859 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:38.597059 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:38.597195 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:25:38.597210 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:25:38.598415 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:25:38.598440 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:25:38.598448 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:25:38.598456 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.600580 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.600914 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.600936 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.601056 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.601245 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.601394 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.601493 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.601638 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.601836 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.601847 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:25:38.703285 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:25:38.703309 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:25:38.703316 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.706210 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.706585 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.706611 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.706796 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.706958 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.707137 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.707252 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.707372 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.707560 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.707571 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:25:38.808324 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:25:38.808386 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:25:38.808392 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:25:38.808400 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:38.808637 1111910 buildroot.go:166] provisioning hostname "ha-430887"
	I0731 20:25:38.808666 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:38.808886 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.811473 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.811815 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.811844 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.811959 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.812157 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.812313 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.812419 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.812597 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.812785 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.812796 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887 && echo "ha-430887" | sudo tee /etc/hostname
	I0731 20:25:38.929052 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:25:38.929092 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.931708 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.932160 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.932186 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.932293 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.932504 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.932676 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.932849 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.933028 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.933254 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.933277 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:25:39.043990 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:25:39.044064 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:25:39.044159 1111910 buildroot.go:174] setting up certificates
	I0731 20:25:39.044173 1111910 provision.go:84] configureAuth start
	I0731 20:25:39.044191 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:39.044484 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.047052 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.047439 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.047459 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.047603 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.049597 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.049889 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.049912 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.050034 1111910 provision.go:143] copyHostCerts
	I0731 20:25:39.050061 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:25:39.050093 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:25:39.050103 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:25:39.050190 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:25:39.050311 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:25:39.050338 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:25:39.050347 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:25:39.050385 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:25:39.050462 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:25:39.050500 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:25:39.050509 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:25:39.050563 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:25:39.050673 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887 san=[127.0.0.1 192.168.39.195 ha-430887 localhost minikube]
	I0731 20:25:39.123742 1111910 provision.go:177] copyRemoteCerts
	I0731 20:25:39.123801 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:25:39.123836 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.126665 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.126997 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.127017 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.127285 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.127500 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.127702 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.127861 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.209849 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:25:39.209931 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:25:39.231847 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:25:39.231909 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:25:39.252992 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:25:39.253063 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 20:25:39.274174 1111910 provision.go:87] duration metric: took 229.983854ms to configureAuth
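
copyRemoteCerts pushes the CA and server certificates into /etc/docker on the guest. A generic sketch of that kind of copy using the system scp client rather than minikube's ssh_runner (paths and the copyCert helper are illustrative; it also assumes the remote paths are writable by the SSH user, whereas minikube stages the bytes over an SSH session with sudo):

package main

import (
	"fmt"
	"os/exec"
)

// copyCert copies a local PEM file to the guest via scp.
func copyCert(keyPath, local, host, remote string) error {
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		local,
		fmt.Sprintf("docker@%s:%s", host, remote))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	key := "/path/to/machines/ha-430887/id_rsa" // placeholder key path
	host := "192.168.39.195"
	for local, remote := range map[string]string{
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
	} {
		if err := copyCert(key, local, host, remote); err != nil {
			panic(err)
		}
	}
}
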
	I0731 20:25:39.274202 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:25:39.274892 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:25:39.275044 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.278157 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.278558 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.278589 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.278753 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.278935 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.279129 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.279256 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.279428 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:39.279625 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:39.279648 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:25:39.527070 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:25:39.527102 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:25:39.527109 1111910 main.go:141] libmachine: (ha-430887) Calling .GetURL
	I0731 20:25:39.528415 1111910 main.go:141] libmachine: (ha-430887) DBG | Using libvirt version 6000000
	I0731 20:25:39.530372 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.530721 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.530752 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.530904 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:25:39.530918 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:25:39.530927 1111910 client.go:171] duration metric: took 20.524737988s to LocalClient.Create
	I0731 20:25:39.530959 1111910 start.go:167] duration metric: took 20.524828329s to libmachine.API.Create "ha-430887"
	I0731 20:25:39.530972 1111910 start.go:293] postStartSetup for "ha-430887" (driver="kvm2")
	I0731 20:25:39.530986 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:25:39.531010 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.531239 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:25:39.531265 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.533320 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.533614 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.533634 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.533814 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.533988 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.534184 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.534321 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.613502 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:25:39.617335 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:25:39.617362 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:25:39.617443 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:25:39.617533 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:25:39.617546 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:25:39.617665 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:25:39.626250 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:25:39.647722 1111910 start.go:296] duration metric: took 116.738226ms for postStartSetup
	I0731 20:25:39.647774 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:39.648411 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.651097 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.651544 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.651571 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.651785 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:25:39.651981 1111910 start.go:128] duration metric: took 20.665577325s to createHost
	I0731 20:25:39.652024 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.654259 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.654574 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.654607 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.654687 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.654874 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.655060 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.655184 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.655346 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:39.655517 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:39.655527 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:25:39.756417 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457539.730567352
	
	I0731 20:25:39.756440 1111910 fix.go:216] guest clock: 1722457539.730567352
	I0731 20:25:39.756449 1111910 fix.go:229] Guest: 2024-07-31 20:25:39.730567352 +0000 UTC Remote: 2024-07-31 20:25:39.651994642 +0000 UTC m=+20.776148366 (delta=78.57271ms)
	I0731 20:25:39.756492 1111910 fix.go:200] guest clock delta is within tolerance: 78.57271ms
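
The fix.go lines above parse the guest's `date +%s.%N` output and compare it against the host clock, only resyncing when the delta exceeds a tolerance. A small Go sketch of that comparison (the 2s tolerance is an assumption for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722457539.730567352") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 7, 31, 20, 25, 39, 651994642, time.UTC)
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // assumed tolerance, for illustration only
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
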
	I0731 20:25:39.756498 1111910 start.go:83] releasing machines lock for "ha-430887", held for 20.77020991s
	I0731 20:25:39.756520 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.756840 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.760054 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.760454 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.760481 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.760625 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761109 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761293 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761391 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:25:39.761444 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.761512 1111910 ssh_runner.go:195] Run: cat /version.json
	I0731 20:25:39.761522 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.763886 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764219 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.764248 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764315 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764392 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.764583 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.764723 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.764764 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.764787 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764871 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.764971 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.765117 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.765272 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.765439 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.860419 1111910 ssh_runner.go:195] Run: systemctl --version
	I0731 20:25:39.865997 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:25:40.025196 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:25:40.030535 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:25:40.030610 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:25:40.045476 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:25:40.045510 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:25:40.045636 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:25:40.060533 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:25:40.073990 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:25:40.074048 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:25:40.086497 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:25:40.098909 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:25:40.205257 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:25:40.340341 1111910 docker.go:233] disabling docker service ...
	I0731 20:25:40.340439 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:25:40.360998 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:25:40.373434 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:25:40.505802 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:25:40.620887 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:25:40.633833 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:25:40.650441 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:25:40.650505 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.659345 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:25:40.659437 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.668428 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.677102 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.686086 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:25:40.695645 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.704885 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.720254 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
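
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A compact Go sketch issuing the same style of edits through a command runner (runRemote is a hypothetical stand-in for ssh_runner and simply runs locally here):

package main

import (
	"fmt"
	"os/exec"
)

// runRemote stands in for minikube's ssh_runner; for this sketch it just
// runs the command locally through sh so the example is self-contained.
func runRemote(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		// point CRI-O at the desired pause image
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		// match the kubelet's cgroup driver
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		// recreate conmon_cgroup under the new manager
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// allow binding low ports inside pods
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
	for _, e := range edits {
		if err := runRemote(e); err != nil {
			panic(err)
		}
	}
}
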
	I0731 20:25:40.729175 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:25:40.737141 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:25:40.737200 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:25:40.747929 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
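
The sysctl probe above fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before CRI-O restarts. The same checks in a minimal Go sketch (must run as root; local execution assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The kernel only exposes this sysctl once br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v: %s", err, out))
		}
	}
	// Pod-to-pod traffic across nodes needs IPv4 forwarding switched on.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("netfilter prerequisites satisfied")
}
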
	I0731 20:25:40.756170 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:25:40.867506 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:25:40.990360 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:25:40.990446 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:25:40.994699 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:25:40.994769 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:25:40.998197 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:25:41.030740 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:25:41.030869 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:25:41.055604 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:25:41.082034 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:25:41.083362 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:41.085829 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:41.086170 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:41.086203 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:41.086386 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:25:41.090040 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
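
The one-liner above is the usual /etc/hosts edit: drop any stale host.minikube.internal entry, append the fresh mapping, and copy the result back over a temp file. The same idea in Go, operating on a local file for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in the given hostname
// and appends "ip\thostname", mirroring the grep/echo pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	if n := len(kept); n > 0 && kept[n-1] == "" {
		kept = kept[:n-1] // avoid a stray blank line before the new entry
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // replace in one step, like cp /tmp/h.$$ in the log
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
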
	I0731 20:25:41.101721 1111910 kubeadm.go:883] updating cluster {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:25:41.101842 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:25:41.101889 1111910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:25:41.131420 1111910 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:25:41.131504 1111910 ssh_runner.go:195] Run: which lz4
	I0731 20:25:41.135133 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 20:25:41.135241 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:25:41.138935 1111910 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:25:41.138970 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:25:42.318932 1111910 crio.go:462] duration metric: took 1.183725382s to copy over tarball
	I0731 20:25:42.319014 1111910 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:25:44.347688 1111910 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028636866s)
	I0731 20:25:44.347725 1111910 crio.go:469] duration metric: took 2.028760944s to extract the tarball
	I0731 20:25:44.347736 1111910 ssh_runner.go:146] rm: /preloaded.tar.lz4
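
Because crictl reports no preloaded images, the ~400 MB preload tarball is copied into the guest and unpacked into /var with tar and lz4, then removed. A sketch of the extract-and-clean-up half (local execution; the tarball path mirrors the log and root privileges are assumed):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarball := "/preloaded.tar.lz4" // already scp'd into the guest in the log

	// Unpack container images and metadata into /var, preserving xattrs
	// such as security.capability, exactly as the logged command does.
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract preload: %v: %s", err, out))
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))

	// The tarball is only a transfer vehicle; remove it afterwards.
	if out, err := exec.Command("sudo", "rm", "-f", tarball).CombinedOutput(); err != nil {
		fmt.Printf("warning: rm: %v: %s\n", err, out)
	}
}
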
	I0731 20:25:44.383939 1111910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:25:44.426129 1111910 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:25:44.426153 1111910 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:25:44.426162 1111910 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0731 20:25:44.426273 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
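
The kubelet drop-in above varies only in the Kubernetes version, node name and node IP; it is rendered and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before the daemon-reload further down. A small text/template sketch producing the same shape of file (the kubeletUnit struct is invented for the example):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the fields that vary per node in the drop-in above.
type kubeletUnit struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Render to stdout; in the log the rendered bytes are scp'd to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf instead.
	err := tmpl.Execute(os.Stdout, kubeletUnit{
		KubernetesVersion: "v1.30.3",
		NodeName:          "ha-430887",
		NodeIP:            "192.168.39.195",
	})
	if err != nil {
		panic(err)
	}
}
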
	I0731 20:25:44.426346 1111910 ssh_runner.go:195] Run: crio config
	I0731 20:25:44.467760 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:44.467783 1111910 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 20:25:44.467793 1111910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:25:44.467815 1111910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430887 NodeName:ha-430887 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:25:44.467970 1111910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430887"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
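
The generated kubeadm.yaml above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A quick sketch that splits such a stream and reports each document's kind (gopkg.in/yaml.v3 and the local filename are assumptions for the example; minikube renders the file from templates rather than parsing it back):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the config above
	if err != nil {
		panic(err)
	}
	// Split on the document separator and read just the type metadata.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", meta.Kind, meta.APIVersion)
	}
}
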
	
	I0731 20:25:44.467998 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:25:44.468043 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:25:44.482515 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:25:44.482631 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
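
The modprobe check before this manifest gates kube-vip's control-plane load-balancing on the IPVS modules being available; when it succeeds, lb_enable/lb_port are added alongside the VIP settings visible in the pod spec above. A tiny Go sketch of that decision (the env map only echoes values already shown in the manifest):

package main

import (
	"fmt"
	"os/exec"
)

// loadBalancingSupported mirrors the gate in the log: kube-vip's
// control-plane load-balancing needs the IPVS modules, so try to load
// them and fall back to plain VIP failover if that fails.
func loadBalancingSupported() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	env := map[string]string{
		"vip_arp":   "true",
		"cp_enable": "true",
		"address":   "192.168.39.254", // the HA virtual IP from the manifest above
		"port":      "8443",
	}
	if loadBalancingSupported() {
		// auto-enable control-plane load-balancing, as the log reports
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	for k, v := range env {
		fmt.Printf("%s=%s\n", k, v)
	}
}
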
	I0731 20:25:44.482689 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:25:44.491723 1111910 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:25:44.491791 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 20:25:44.500247 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 20:25:44.514707 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:25:44.528884 1111910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 20:25:44.543184 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 20:25:44.557714 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:25:44.561008 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:25:44.571667 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:25:44.684801 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:25:44.700340 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.195
	I0731 20:25:44.700373 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:25:44.700398 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.700614 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:25:44.700679 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:25:44.700692 1111910 certs.go:256] generating profile certs ...
	I0731 20:25:44.700768 1111910 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:25:44.700789 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt with IP's: []
	I0731 20:25:44.916462 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt ...
	I0731 20:25:44.916496 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt: {Name:mkd3b433aa6ef2fdcaf6e733c05cf9b7b64071b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.916711 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key ...
	I0731 20:25:44.916727 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key: {Name:mke53210658faf7d54674a82834fe27cbb53cd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.916857 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf
	I0731 20:25:44.916880 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.254]
	I0731 20:25:45.051228 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf ...
	I0731 20:25:45.051264 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf: {Name:mk06a05e571b29664204fa70b015d5d5754cbff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.051464 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf ...
	I0731 20:25:45.051483 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf: {Name:mk8374603a62e3418a1af38d213a37a82028883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.051600 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:25:45.051685 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:25:45.051740 1111910 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:25:45.051755 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt with IP's: []
	I0731 20:25:45.291071 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt ...
	I0731 20:25:45.291105 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt: {Name:mkfa0436e509266f42d4575db891252e0ff63705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.291301 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key ...
	I0731 20:25:45.291315 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key: {Name:mk0dd3fcece20ff7bede948336cf8b1df95f7897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
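
For context on the cert generation logged above: the profile certs are signed by the shared minikube CA, and the apiserver cert carries the service IP, loopback, node IP and HA VIP as SANs. Below is a minimal, hypothetical sketch of producing such a cert with Go's crypto/x509; the throwaway CA, subject names and validity period are illustrative, and this is not minikube's actual crypto.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Toy CA; in the real flow the CA already exists under .minikube/ca.{crt,key}.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Leaf cert for the API server, signed by the CA, carrying the IP SANs seen in the log.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.195"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
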
	I0731 20:25:45.291419 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:25:45.291439 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:25:45.291450 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:25:45.291464 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:25:45.291476 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:25:45.291489 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:25:45.291501 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:25:45.291512 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:25:45.291563 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:25:45.291608 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:25:45.291619 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:25:45.291642 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:25:45.291696 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:25:45.291727 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:25:45.291767 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:25:45.291795 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.291808 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.291821 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.292393 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:25:45.315526 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:25:45.336674 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:25:45.358096 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:25:45.379514 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:25:45.400799 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:25:45.421664 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:25:45.444971 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:25:45.473815 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:25:45.509793 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:25:45.535602 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:25:45.558953 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:25:45.575357 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:25:45.580647 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:25:45.591849 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.596010 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.596054 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.601397 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:25:45.611106 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:25:45.620611 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.624539 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.624575 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.629427 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:25:45.639303 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:25:45.648962 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.652748 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.652792 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.657795 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
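
The three `test -L ... || ln -fs` runs above publish each CA under its OpenSSL subject-hash name in /etc/ssl/certs so TLS clients on the guest can find it. A small sketch of that idempotent linking step follows; the hash value is copied from the log (b5213941 for minikubeCA.pem), and computing it is left to the `openssl x509 -hash -noout` call shown above. Creating links under /etc/ssl/certs needs root, which the log handles via sudo.

package main

import (
	"fmt"
	"os"
)

// ensureHashLink creates /etc/ssl/certs/<hash>.0 -> target unless a symlink
// already exists, mirroring the shell one-liner in the log.
func ensureHashLink(hash, target string) error {
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already linked
	}
	_ = os.Remove(link) // replace a stale regular file, like `ln -fs` would
	return os.Symlink(target, link)
}

func main() {
	if err := ensureHashLink("b5213941", "/etc/ssl/certs/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, "link failed:", err)
	}
}
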
	I0731 20:25:45.667454 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:25:45.671010 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
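
The failed stat above is what drives the "likely first start" decision: if apiserver-kubelet-client.crt is absent, kubeadm has never initialized this node. A trivial equivalent check, with the path copied from the log and everything else illustrative:

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Println("cert doesn't exist, likely first start")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("cert already present, not a first start")
	}
}
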
	I0731 20:25:45.671063 1111910 kubeadm.go:392] StartCluster: {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:25:45.671143 1111910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:25:45.671192 1111910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:25:45.704543 1111910 cri.go:89] found id: ""
	I0731 20:25:45.704632 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:25:45.714330 1111910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:25:45.723384 1111910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:25:45.732027 1111910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:25:45.732047 1111910 kubeadm.go:157] found existing configuration files:
	
	I0731 20:25:45.732104 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:25:45.740154 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:25:45.740220 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:25:45.748745 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:25:45.756896 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:25:45.756956 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:25:45.765270 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:25:45.773288 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:25:45.773335 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:25:45.781940 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:25:45.789961 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:25:45.790020 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
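
The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so `kubeadm init` regenerates it. A rough, hypothetical sketch of the same idea; the helper name and error handling are mine, not minikube's.

package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean; on a first start the files are absent, as in the log
		}
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // config already targets the expected endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
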
	I0731 20:25:45.798408 1111910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:25:46.002590 1111910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:25:57.171147 1111910 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 20:25:57.171233 1111910 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:25:57.171348 1111910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:25:57.171508 1111910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:25:57.171623 1111910 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:25:57.171691 1111910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:25:57.173230 1111910 out.go:204]   - Generating certificates and keys ...
	I0731 20:25:57.173293 1111910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:25:57.173350 1111910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:25:57.173436 1111910 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:25:57.173492 1111910 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:25:57.173542 1111910 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:25:57.173585 1111910 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:25:57.173630 1111910 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:25:57.173745 1111910 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-430887 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0731 20:25:57.173789 1111910 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:25:57.173926 1111910 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-430887 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0731 20:25:57.174025 1111910 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:25:57.174120 1111910 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:25:57.174196 1111910 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:25:57.174279 1111910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:25:57.174344 1111910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:25:57.174420 1111910 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 20:25:57.174496 1111910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:25:57.174593 1111910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:25:57.174644 1111910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:25:57.174730 1111910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:25:57.174837 1111910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:25:57.176268 1111910 out.go:204]   - Booting up control plane ...
	I0731 20:25:57.176360 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:25:57.176429 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:25:57.176484 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:25:57.176580 1111910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:25:57.176668 1111910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:25:57.176702 1111910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:25:57.176809 1111910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 20:25:57.176906 1111910 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 20:25:57.177004 1111910 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.56619ms
	I0731 20:25:57.177067 1111910 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 20:25:57.177122 1111910 kubeadm.go:310] [api-check] The API server is healthy after 6.124767423s
	I0731 20:25:57.177206 1111910 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 20:25:57.177315 1111910 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 20:25:57.177401 1111910 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 20:25:57.177580 1111910 kubeadm.go:310] [mark-control-plane] Marking the node ha-430887 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 20:25:57.177630 1111910 kubeadm.go:310] [bootstrap-token] Using token: tzik02.6j5yn2d1mg1f7i4r
	I0731 20:25:57.178808 1111910 out.go:204]   - Configuring RBAC rules ...
	I0731 20:25:57.178901 1111910 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 20:25:57.178969 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 20:25:57.179085 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 20:25:57.179188 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 20:25:57.179295 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 20:25:57.179380 1111910 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 20:25:57.179476 1111910 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 20:25:57.179514 1111910 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 20:25:57.179558 1111910 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 20:25:57.179564 1111910 kubeadm.go:310] 
	I0731 20:25:57.179614 1111910 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 20:25:57.179623 1111910 kubeadm.go:310] 
	I0731 20:25:57.179688 1111910 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 20:25:57.179697 1111910 kubeadm.go:310] 
	I0731 20:25:57.179727 1111910 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 20:25:57.179777 1111910 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 20:25:57.179819 1111910 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 20:25:57.179830 1111910 kubeadm.go:310] 
	I0731 20:25:57.179878 1111910 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 20:25:57.179884 1111910 kubeadm.go:310] 
	I0731 20:25:57.179928 1111910 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 20:25:57.179934 1111910 kubeadm.go:310] 
	I0731 20:25:57.179977 1111910 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 20:25:57.180045 1111910 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 20:25:57.180122 1111910 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 20:25:57.180133 1111910 kubeadm.go:310] 
	I0731 20:25:57.180202 1111910 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 20:25:57.180317 1111910 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 20:25:57.180331 1111910 kubeadm.go:310] 
	I0731 20:25:57.180441 1111910 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tzik02.6j5yn2d1mg1f7i4r \
	I0731 20:25:57.180562 1111910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 20:25:57.180585 1111910 kubeadm.go:310] 	--control-plane 
	I0731 20:25:57.180591 1111910 kubeadm.go:310] 
	I0731 20:25:57.180662 1111910 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 20:25:57.180669 1111910 kubeadm.go:310] 
	I0731 20:25:57.180746 1111910 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tzik02.6j5yn2d1mg1f7i4r \
	I0731 20:25:57.180850 1111910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
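
A side note on the join command printed above: per the kubeadm docs, --discovery-token-ca-cert-hash is a sha256 pin over the CA certificate's Subject Public Key Info, so it can be recomputed from ca.crt alone. A short sketch of that recomputation; the path is the guest-side one from the log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw SPKI bytes, matching the sha256:<hex> form shown in the join command.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
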
	I0731 20:25:57.180867 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:57.180876 1111910 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 20:25:57.182379 1111910 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 20:25:57.183560 1111910 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 20:25:57.188768 1111910 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 20:25:57.188785 1111910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0731 20:25:57.208766 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
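
The CNI step above writes the rendered kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl binary. A bare-bones sketch of that apply step, assuming the manifest is already rendered; the placeholder manifest below is obviously not the real kindnet YAML, and the sudo/exec plumbing stands in for minikube's ssh_runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# rendered kindnet DaemonSet manifest would go here\n")
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}
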
	I0731 20:25:57.549116 1111910 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:25:57.549195 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:57.549229 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887 minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=true
	I0731 20:25:57.705734 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:57.710642 1111910 ops.go:34] apiserver oom_adj: -16
	I0731 20:25:58.205724 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:58.706404 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:59.206744 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:59.705751 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:00.205902 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:00.705972 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:01.205739 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:01.705881 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:02.206605 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:02.705761 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:03.205897 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:03.706124 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:04.206228 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:04.705877 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:05.205810 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:05.706578 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:06.206135 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:06.705686 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:07.205883 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:07.706190 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:08.206737 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:08.706316 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:09.206116 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:09.706075 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:10.205939 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:10.706353 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.206096 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.706733 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.814485 1111910 kubeadm.go:1113] duration metric: took 14.265357492s to wait for elevateKubeSystemPrivileges
	I0731 20:26:11.814529 1111910 kubeadm.go:394] duration metric: took 26.143472383s to StartCluster
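
The burst of `kubectl get sa default` runs at roughly 500ms intervals above is a readiness poll: the default ServiceAccount has to exist before the minikube-rbac cluster-admin binding can be usable, and the 14.27s duration metric is that wait. A minimal version of such a poll might look like this; the helper name, timeout and sudo usage are assumptions, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default ServiceAccount is present
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence seen in the log
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
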
	I0731 20:26:11.814548 1111910 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:11.814642 1111910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:26:11.815550 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:11.815810 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 20:26:11.815812 1111910 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:11.815838 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:26:11.815855 1111910 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:26:11.815924 1111910 addons.go:69] Setting storage-provisioner=true in profile "ha-430887"
	I0731 20:26:11.815951 1111910 addons.go:69] Setting default-storageclass=true in profile "ha-430887"
	I0731 20:26:11.816007 1111910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430887"
	I0731 20:26:11.815959 1111910 addons.go:234] Setting addon storage-provisioner=true in "ha-430887"
	I0731 20:26:11.816078 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:11.816121 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:11.816452 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.816461 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.816483 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.816486 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.832234 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35407
	I0731 20:26:11.832298 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0731 20:26:11.832752 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.832784 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.833284 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.833295 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.833309 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.833318 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.833654 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.833681 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.833862 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.834196 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.834221 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.836583 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:26:11.836928 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 20:26:11.837460 1111910 cert_rotation.go:137] Starting client certificate rotation controller
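
The rest.Config dump above is the client minikube builds from the freshly written kubeconfig: client cert and key from the profile directory, pointed at the HA VIP 192.168.39.254:8443. For illustration, a hedged client-go sketch that loads the same kubeconfig and lists StorageClasses, the same resource queried via round_trippers further down; the import paths are standard client-go, the rest is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19360-1093692/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
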
	I0731 20:26:11.837754 1111910 addons.go:234] Setting addon default-storageclass=true in "ha-430887"
	I0731 20:26:11.837806 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:11.838191 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.838226 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.849865 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34029
	I0731 20:26:11.850439 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.850975 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.851004 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.851383 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.851595 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.853341 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:11.855983 1111910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:26:11.856785 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0731 20:26:11.857211 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.857321 1111910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:26:11.857341 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:26:11.857362 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:11.857747 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.857767 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.858094 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.858683 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.858727 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.860796 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.861302 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:11.861352 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.861643 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:11.861813 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:11.861997 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:11.862127 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:11.874601 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0731 20:26:11.875023 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.875679 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.875702 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.876130 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.876335 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.878191 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:11.878427 1111910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:26:11.878443 1111910 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:26:11.878458 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:11.881350 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.881786 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:11.881810 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.882057 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:11.882231 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:11.882396 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:11.882530 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:11.918245 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 20:26:12.021622 1111910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:26:12.051312 1111910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:26:12.331501 1111910 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
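
The long sed pipeline above rewrites the CoreDNS Corefile so host.minikube.internal resolves to the host-only gateway (192.168.39.1 here) and then replaces the ConfigMap. A string-level sketch of just the hosts-block injection, independent of kubectl; the sample Corefile and function name are illustrative.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS `hosts` block immediately before the
// `forward . /etc/resolv.conf` plugin line, mirroring what the sed does.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
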
	I0731 20:26:12.440968 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.440996 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.441358 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.441382 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.441402 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.441404 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.441416 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.441688 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.441750 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.441776 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.441908 1111910 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 20:26:12.441919 1111910 round_trippers.go:469] Request Headers:
	I0731 20:26:12.441929 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:26:12.441937 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:26:12.455968 1111910 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 20:26:12.456876 1111910 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 20:26:12.456896 1111910 round_trippers.go:469] Request Headers:
	I0731 20:26:12.456909 1111910 round_trippers.go:473]     Content-Type: application/json
	I0731 20:26:12.456919 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:26:12.456928 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:26:12.463864 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:26:12.464077 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.464106 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.464446 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.464466 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.623094 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.623118 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.623466 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.623486 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.623502 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.623512 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.624198 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.624218 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.624233 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.625936 1111910 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 20:26:12.627141 1111910 addons.go:510] duration metric: took 811.286894ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0731 20:26:12.627189 1111910 start.go:246] waiting for cluster config update ...
	I0731 20:26:12.627205 1111910 start.go:255] writing updated cluster config ...
	I0731 20:26:12.628783 1111910 out.go:177] 
	I0731 20:26:12.630067 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:12.630201 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:12.631737 1111910 out.go:177] * Starting "ha-430887-m02" control-plane node in "ha-430887" cluster
	I0731 20:26:12.633162 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:26:12.633190 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:26:12.633287 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:26:12.633305 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:26:12.633364 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:12.633652 1111910 start.go:360] acquireMachinesLock for ha-430887-m02: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:26:12.633704 1111910 start.go:364] duration metric: took 29.367µs to acquireMachinesLock for "ha-430887-m02"
	I0731 20:26:12.633734 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:12.633803 1111910 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 20:26:12.635315 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:26:12.635397 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:12.635422 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:12.650971 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34673
	I0731 20:26:12.651504 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:12.652061 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:12.652110 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:12.652503 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:12.652715 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:12.652869 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:12.653025 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:26:12.653054 1111910 client.go:168] LocalClient.Create starting
	I0731 20:26:12.653091 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:26:12.653134 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:26:12.653155 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:26:12.653236 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:26:12.653269 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:26:12.653286 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:26:12.653310 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:26:12.653321 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .PreCreateCheck
	I0731 20:26:12.653535 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:12.653996 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:26:12.654017 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .Create
	I0731 20:26:12.654210 1111910 main.go:141] libmachine: (ha-430887-m02) Creating KVM machine...
	I0731 20:26:12.655537 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found existing default KVM network
	I0731 20:26:12.655682 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found existing private KVM network mk-ha-430887
	I0731 20:26:12.655842 1111910 main.go:141] libmachine: (ha-430887-m02) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 ...
	I0731 20:26:12.655869 1111910 main.go:141] libmachine: (ha-430887-m02) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:26:12.655944 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:12.655833 1112279 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:26:12.656065 1111910 main.go:141] libmachine: (ha-430887-m02) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:26:12.917937 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:12.917783 1112279 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa...
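
The "Creating ssh key" step above generates the id_rsa keypair used to reach the new m02 VM. A minimal, hypothetical equivalent using golang.org/x/crypto/ssh for the public half; the output paths are illustrative rather than the machine store path from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key in PEM, written 0600 like a normal id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public key in authorized_keys format.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
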
	I0731 20:26:13.216991 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:13.216842 1112279 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/ha-430887-m02.rawdisk...
	I0731 20:26:13.217040 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Writing magic tar header
	I0731 20:26:13.217051 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Writing SSH key tar header
	I0731 20:26:13.217059 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:13.216956 1112279 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 ...
	I0731 20:26:13.217155 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 (perms=drwx------)
	I0731 20:26:13.217180 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:26:13.217194 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02
	I0731 20:26:13.217209 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:26:13.217225 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:26:13.217238 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:26:13.217249 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:26:13.217261 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:26:13.217279 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:26:13.217291 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:26:13.217306 1111910 main.go:141] libmachine: (ha-430887-m02) Creating domain...
	I0731 20:26:13.217319 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:26:13.217325 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:26:13.217333 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home
	I0731 20:26:13.217342 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Skipping /home - not owner
	I0731 20:26:13.218273 1111910 main.go:141] libmachine: (ha-430887-m02) define libvirt domain using xml: 
	I0731 20:26:13.218300 1111910 main.go:141] libmachine: (ha-430887-m02) <domain type='kvm'>
	I0731 20:26:13.218310 1111910 main.go:141] libmachine: (ha-430887-m02)   <name>ha-430887-m02</name>
	I0731 20:26:13.218315 1111910 main.go:141] libmachine: (ha-430887-m02)   <memory unit='MiB'>2200</memory>
	I0731 20:26:13.218321 1111910 main.go:141] libmachine: (ha-430887-m02)   <vcpu>2</vcpu>
	I0731 20:26:13.218326 1111910 main.go:141] libmachine: (ha-430887-m02)   <features>
	I0731 20:26:13.218334 1111910 main.go:141] libmachine: (ha-430887-m02)     <acpi/>
	I0731 20:26:13.218343 1111910 main.go:141] libmachine: (ha-430887-m02)     <apic/>
	I0731 20:26:13.218352 1111910 main.go:141] libmachine: (ha-430887-m02)     <pae/>
	I0731 20:26:13.218362 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218368 1111910 main.go:141] libmachine: (ha-430887-m02)   </features>
	I0731 20:26:13.218373 1111910 main.go:141] libmachine: (ha-430887-m02)   <cpu mode='host-passthrough'>
	I0731 20:26:13.218378 1111910 main.go:141] libmachine: (ha-430887-m02)   
	I0731 20:26:13.218385 1111910 main.go:141] libmachine: (ha-430887-m02)   </cpu>
	I0731 20:26:13.218391 1111910 main.go:141] libmachine: (ha-430887-m02)   <os>
	I0731 20:26:13.218397 1111910 main.go:141] libmachine: (ha-430887-m02)     <type>hvm</type>
	I0731 20:26:13.218437 1111910 main.go:141] libmachine: (ha-430887-m02)     <boot dev='cdrom'/>
	I0731 20:26:13.218465 1111910 main.go:141] libmachine: (ha-430887-m02)     <boot dev='hd'/>
	I0731 20:26:13.218477 1111910 main.go:141] libmachine: (ha-430887-m02)     <bootmenu enable='no'/>
	I0731 20:26:13.218487 1111910 main.go:141] libmachine: (ha-430887-m02)   </os>
	I0731 20:26:13.218495 1111910 main.go:141] libmachine: (ha-430887-m02)   <devices>
	I0731 20:26:13.218506 1111910 main.go:141] libmachine: (ha-430887-m02)     <disk type='file' device='cdrom'>
	I0731 20:26:13.218522 1111910 main.go:141] libmachine: (ha-430887-m02)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/boot2docker.iso'/>
	I0731 20:26:13.218537 1111910 main.go:141] libmachine: (ha-430887-m02)       <target dev='hdc' bus='scsi'/>
	I0731 20:26:13.218549 1111910 main.go:141] libmachine: (ha-430887-m02)       <readonly/>
	I0731 20:26:13.218560 1111910 main.go:141] libmachine: (ha-430887-m02)     </disk>
	I0731 20:26:13.218575 1111910 main.go:141] libmachine: (ha-430887-m02)     <disk type='file' device='disk'>
	I0731 20:26:13.218587 1111910 main.go:141] libmachine: (ha-430887-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:26:13.218603 1111910 main.go:141] libmachine: (ha-430887-m02)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/ha-430887-m02.rawdisk'/>
	I0731 20:26:13.218613 1111910 main.go:141] libmachine: (ha-430887-m02)       <target dev='hda' bus='virtio'/>
	I0731 20:26:13.218625 1111910 main.go:141] libmachine: (ha-430887-m02)     </disk>
	I0731 20:26:13.218636 1111910 main.go:141] libmachine: (ha-430887-m02)     <interface type='network'>
	I0731 20:26:13.218646 1111910 main.go:141] libmachine: (ha-430887-m02)       <source network='mk-ha-430887'/>
	I0731 20:26:13.218657 1111910 main.go:141] libmachine: (ha-430887-m02)       <model type='virtio'/>
	I0731 20:26:13.218669 1111910 main.go:141] libmachine: (ha-430887-m02)     </interface>
	I0731 20:26:13.218677 1111910 main.go:141] libmachine: (ha-430887-m02)     <interface type='network'>
	I0731 20:26:13.218689 1111910 main.go:141] libmachine: (ha-430887-m02)       <source network='default'/>
	I0731 20:26:13.218699 1111910 main.go:141] libmachine: (ha-430887-m02)       <model type='virtio'/>
	I0731 20:26:13.218711 1111910 main.go:141] libmachine: (ha-430887-m02)     </interface>
	I0731 20:26:13.218725 1111910 main.go:141] libmachine: (ha-430887-m02)     <serial type='pty'>
	I0731 20:26:13.218736 1111910 main.go:141] libmachine: (ha-430887-m02)       <target port='0'/>
	I0731 20:26:13.218744 1111910 main.go:141] libmachine: (ha-430887-m02)     </serial>
	I0731 20:26:13.218767 1111910 main.go:141] libmachine: (ha-430887-m02)     <console type='pty'>
	I0731 20:26:13.218779 1111910 main.go:141] libmachine: (ha-430887-m02)       <target type='serial' port='0'/>
	I0731 20:26:13.218788 1111910 main.go:141] libmachine: (ha-430887-m02)     </console>
	I0731 20:26:13.218802 1111910 main.go:141] libmachine: (ha-430887-m02)     <rng model='virtio'>
	I0731 20:26:13.218816 1111910 main.go:141] libmachine: (ha-430887-m02)       <backend model='random'>/dev/random</backend>
	I0731 20:26:13.218826 1111910 main.go:141] libmachine: (ha-430887-m02)     </rng>
	I0731 20:26:13.218834 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218840 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218849 1111910 main.go:141] libmachine: (ha-430887-m02)   </devices>
	I0731 20:26:13.218858 1111910 main.go:141] libmachine: (ha-430887-m02) </domain>
	I0731 20:26:13.218899 1111910 main.go:141] libmachine: (ha-430887-m02) 
	I0731 20:26:13.225601 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:36:d8:14 in network default
	I0731 20:26:13.226161 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring networks are active...
	I0731 20:26:13.226183 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:13.226973 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring network default is active
	I0731 20:26:13.227321 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring network mk-ha-430887 is active
	I0731 20:26:13.227759 1111910 main.go:141] libmachine: (ha-430887-m02) Getting domain xml...
	I0731 20:26:13.228437 1111910 main.go:141] libmachine: (ha-430887-m02) Creating domain...
	I0731 20:26:14.505856 1111910 main.go:141] libmachine: (ha-430887-m02) Waiting to get IP...
	I0731 20:26:14.506750 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:14.507126 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:14.507182 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:14.507110 1112279 retry.go:31] will retry after 296.364136ms: waiting for machine to come up
	I0731 20:26:14.804694 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:14.805270 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:14.805305 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:14.805178 1112279 retry.go:31] will retry after 242.235382ms: waiting for machine to come up
	I0731 20:26:15.048741 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.049157 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.049191 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.049099 1112279 retry.go:31] will retry after 344.680901ms: waiting for machine to come up
	I0731 20:26:15.395869 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.396306 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.396334 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.396271 1112279 retry.go:31] will retry after 392.20081ms: waiting for machine to come up
	I0731 20:26:15.789746 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.790090 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.790141 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.790062 1112279 retry.go:31] will retry after 734.361712ms: waiting for machine to come up
	I0731 20:26:16.526332 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:16.526806 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:16.526838 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:16.526741 1112279 retry.go:31] will retry after 852.201503ms: waiting for machine to come up
	I0731 20:26:17.380742 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:17.381140 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:17.381168 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:17.381097 1112279 retry.go:31] will retry after 717.122097ms: waiting for machine to come up
	I0731 20:26:18.100265 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:18.100650 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:18.100680 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:18.100596 1112279 retry.go:31] will retry after 1.021652149s: waiting for machine to come up
	I0731 20:26:19.124644 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:19.125147 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:19.125179 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:19.125088 1112279 retry.go:31] will retry after 1.407259848s: waiting for machine to come up
	I0731 20:26:20.534586 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:20.535070 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:20.535095 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:20.535045 1112279 retry.go:31] will retry after 1.618860446s: waiting for machine to come up
	I0731 20:26:22.155990 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:22.156574 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:22.156601 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:22.156531 1112279 retry.go:31] will retry after 2.562240882s: waiting for machine to come up
	I0731 20:26:24.721742 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:24.722132 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:24.722155 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:24.722089 1112279 retry.go:31] will retry after 2.774660653s: waiting for machine to come up
	I0731 20:26:27.497869 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:27.498288 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:27.498320 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:27.498231 1112279 retry.go:31] will retry after 3.183060561s: waiting for machine to come up
	I0731 20:26:30.685033 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:30.685443 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:30.685470 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:30.685403 1112279 retry.go:31] will retry after 4.312733669s: waiting for machine to come up
	I0731 20:26:35.000851 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.001268 1111910 main.go:141] libmachine: (ha-430887-m02) Found IP for machine: 192.168.39.149
	I0731 20:26:35.001293 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has current primary IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.001300 1111910 main.go:141] libmachine: (ha-430887-m02) Reserving static IP address...
	I0731 20:26:35.001563 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find host DHCP lease matching {name: "ha-430887-m02", mac: "52:54:00:4a:64:33", ip: "192.168.39.149"} in network mk-ha-430887
	I0731 20:26:35.076504 1111910 main.go:141] libmachine: (ha-430887-m02) Reserved static IP address: 192.168.39.149
	I0731 20:26:35.076540 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Getting to WaitForSSH function...
	I0731 20:26:35.076574 1111910 main.go:141] libmachine: (ha-430887-m02) Waiting for SSH to be available...
	I0731 20:26:35.079205 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.079512 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887
	I0731 20:26:35.079543 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find defined IP address of network mk-ha-430887 interface with MAC address 52:54:00:4a:64:33
	I0731 20:26:35.079662 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH client type: external
	I0731 20:26:35.079694 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa (-rw-------)
	I0731 20:26:35.079723 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:26:35.079736 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | About to run SSH command:
	I0731 20:26:35.079753 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | exit 0
	I0731 20:26:35.083370 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | SSH cmd err, output: exit status 255: 
	I0731 20:26:35.083391 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 20:26:35.083398 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | command : exit 0
	I0731 20:26:35.083403 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | err     : exit status 255
	I0731 20:26:35.083410 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | output  : 
	I0731 20:26:38.083891 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Getting to WaitForSSH function...
	I0731 20:26:38.086581 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.086953 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.086978 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.087099 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH client type: external
	I0731 20:26:38.087125 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa (-rw-------)
	I0731 20:26:38.087148 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:26:38.087158 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | About to run SSH command:
	I0731 20:26:38.087167 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | exit 0
	I0731 20:26:38.212271 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 20:26:38.212558 1111910 main.go:141] libmachine: (ha-430887-m02) KVM machine creation complete!
	I0731 20:26:38.212908 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:38.213562 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:38.213771 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:38.214043 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:26:38.214075 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:26:38.215533 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:26:38.215548 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:26:38.215554 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:26:38.215560 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.218056 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.218478 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.218521 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.218660 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.218830 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.219013 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.219127 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.219302 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.219523 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.219534 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:26:38.323321 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:26:38.323360 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:26:38.323374 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.326362 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.326782 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.326805 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.326987 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.327332 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.327572 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.327734 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.327890 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.328120 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.328135 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:26:38.432530 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:26:38.432593 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:26:38.432600 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:26:38.432611 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.432916 1111910 buildroot.go:166] provisioning hostname "ha-430887-m02"
	I0731 20:26:38.432945 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.433184 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.435455 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.435833 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.435862 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.435994 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.436194 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.436346 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.436499 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.436640 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.436827 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.436842 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887-m02 && echo "ha-430887-m02" | sudo tee /etc/hostname
	I0731 20:26:38.553520 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887-m02
	
	I0731 20:26:38.553553 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.556489 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.556883 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.556907 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.557139 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.557407 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.557578 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.557748 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.557917 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.558091 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.558117 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:26:38.672592 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:26:38.672628 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:26:38.672654 1111910 buildroot.go:174] setting up certificates
	I0731 20:26:38.672664 1111910 provision.go:84] configureAuth start
	I0731 20:26:38.672674 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.673070 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:38.676175 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.676534 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.676561 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.676727 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.678998 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.679349 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.679388 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.679517 1111910 provision.go:143] copyHostCerts
	I0731 20:26:38.679572 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:26:38.679609 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:26:38.679617 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:26:38.679686 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:26:38.679765 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:26:38.679784 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:26:38.679791 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:26:38.679817 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:26:38.679878 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:26:38.679899 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:26:38.679905 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:26:38.679929 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:26:38.680027 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887-m02 san=[127.0.0.1 192.168.39.149 ha-430887-m02 localhost minikube]
	I0731 20:26:38.823781 1111910 provision.go:177] copyRemoteCerts
	I0731 20:26:38.823860 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:26:38.823892 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.826428 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.826784 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.826820 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.826993 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.827203 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.827377 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.827502 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:38.909710 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:26:38.909804 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:26:38.932062 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:26:38.932158 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:26:38.953382 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:26:38.953449 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:26:38.974894 1111910 provision.go:87] duration metric: took 302.215899ms to configureAuth
	I0731 20:26:38.974923 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:26:38.975151 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:38.975241 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.977685 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.977954 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.977983 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.978168 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.978377 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.978532 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.978655 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.978803 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.978962 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.978975 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:26:39.229077 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:26:39.229109 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:26:39.229119 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetURL
	I0731 20:26:39.230419 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using libvirt version 6000000
	I0731 20:26:39.233095 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.233462 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.233489 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.233653 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:26:39.233666 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:26:39.233673 1111910 client.go:171] duration metric: took 26.580611093s to LocalClient.Create
	I0731 20:26:39.233696 1111910 start.go:167] duration metric: took 26.580674342s to libmachine.API.Create "ha-430887"
	I0731 20:26:39.233707 1111910 start.go:293] postStartSetup for "ha-430887-m02" (driver="kvm2")
	I0731 20:26:39.233724 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:26:39.233750 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.234011 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:26:39.234045 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.236209 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.236586 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.236615 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.236732 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.236933 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.237099 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.237244 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.318731 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:26:39.322681 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:26:39.322711 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:26:39.322782 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:26:39.322854 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:26:39.322865 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:26:39.322950 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:26:39.331783 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:26:39.353724 1111910 start.go:296] duration metric: took 119.998484ms for postStartSetup
	I0731 20:26:39.353783 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:39.354389 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:39.357184 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.357566 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.357598 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.357831 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:39.358050 1111910 start.go:128] duration metric: took 26.7242363s to createHost
	I0731 20:26:39.358079 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.360308 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.360693 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.360717 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.360871 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.361052 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.361207 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.361416 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.361609 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:39.361792 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:39.361802 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:26:39.464485 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457599.444996017
	
	I0731 20:26:39.464522 1111910 fix.go:216] guest clock: 1722457599.444996017
	I0731 20:26:39.464532 1111910 fix.go:229] Guest: 2024-07-31 20:26:39.444996017 +0000 UTC Remote: 2024-07-31 20:26:39.358065032 +0000 UTC m=+80.482218756 (delta=86.930985ms)
	I0731 20:26:39.464556 1111910 fix.go:200] guest clock delta is within tolerance: 86.930985ms
	I0731 20:26:39.464564 1111910 start.go:83] releasing machines lock for "ha-430887-m02", held for 26.830842141s
	I0731 20:26:39.464589 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.464910 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:39.467580 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.467956 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.467999 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.470219 1111910 out.go:177] * Found network options:
	I0731 20:26:39.471591 1111910 out.go:177]   - NO_PROXY=192.168.39.195
	W0731 20:26:39.472689 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:26:39.472726 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473291 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473520 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473637 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:26:39.473717 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	W0731 20:26:39.473751 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:26:39.473831 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:26:39.473853 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.476177 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476531 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.476559 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476618 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476715 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.476891 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.477027 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.477052 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.477081 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.477194 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.477276 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.477329 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.477469 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.477607 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.711016 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:26:39.716793 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:26:39.716870 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:26:39.734638 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:26:39.734663 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:26:39.734743 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:26:39.753165 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:26:39.767886 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:26:39.767973 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:26:39.782152 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:26:39.796003 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:26:39.913306 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:26:40.048365 1111910 docker.go:233] disabling docker service ...
	I0731 20:26:40.048455 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:26:40.061809 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:26:40.073628 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:26:40.207469 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:26:40.337871 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:26:40.351142 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:26:40.368003 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:26:40.368081 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.377567 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:26:40.377645 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.387453 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.396923 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.406054 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:26:40.415550 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.424726 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.440278 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.449650 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:26:40.457983 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:26:40.458061 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:26:40.469947 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:26:40.482291 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:26:40.601248 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:26:40.729252 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:26:40.729330 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:26:40.733471 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:26:40.733506 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:26:40.736938 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:26:40.771732 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:26:40.771841 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:26:40.797903 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:26:40.826575 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:26:40.828330 1111910 out.go:177]   - env NO_PROXY=192.168.39.195
	I0731 20:26:40.829666 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:40.832404 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:40.832797 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:40.832823 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:40.833032 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:26:40.836968 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:26:40.848813 1111910 mustload.go:65] Loading cluster: ha-430887
	I0731 20:26:40.849068 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:40.849432 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:40.849468 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:40.864534 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I0731 20:26:40.865033 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:40.865516 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:40.865540 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:40.865856 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:40.866043 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:40.867540 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:40.867968 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:40.868004 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:40.882742 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0731 20:26:40.883135 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:40.883584 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:40.883616 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:40.883964 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:40.884236 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:40.884461 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.149
	I0731 20:26:40.884474 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:26:40.884501 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.884690 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:26:40.884748 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:26:40.884767 1111910 certs.go:256] generating profile certs ...
	I0731 20:26:40.884870 1111910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:26:40.884903 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490
	I0731 20:26:40.884923 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.254]
	I0731 20:26:40.985889 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 ...
	I0731 20:26:40.985922 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490: {Name:mkb205178a896117b37b860bb0c1e6c1f7ceb4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.986141 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490 ...
	I0731 20:26:40.986162 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490: {Name:mk00df486dd33be11c2b466cc37cc360d6e75de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.986265 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:26:40.986423 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:26:40.986611 1111910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:26:40.986633 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:26:40.986654 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:26:40.986674 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:26:40.986692 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:26:40.986709 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:26:40.986724 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:26:40.986740 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:26:40.986758 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
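Because a second control-plane member is being added, the apiserver serving cert is regenerated above with SANs for the service IP, localhost, both node IPs and the HA VIP. One way to confirm the SAN list on the generated cert (path taken from this log; this is just a sanity check, not something the test itself runs):

    # inspect the SANs baked into the regenerated apiserver certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.195, 192.168.39.149 and the VIP 192.168.39.254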
	I0731 20:26:40.986821 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:26:40.986861 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:26:40.986875 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:26:40.986914 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:26:40.986944 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:26:40.986973 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:26:40.987029 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:26:40.987066 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:26:40.987086 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:40.987105 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:26:40.987149 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:40.990347 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:40.990727 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:40.990752 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:40.990967 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:40.991185 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:40.991358 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:40.991517 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:41.064454 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 20:26:41.069124 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 20:26:41.079528 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 20:26:41.083559 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 20:26:41.097602 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 20:26:41.101658 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 20:26:41.111855 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 20:26:41.115804 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 20:26:41.126033 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 20:26:41.129835 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 20:26:41.139268 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 20:26:41.143253 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 20:26:41.153352 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:26:41.176218 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:26:41.197757 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:26:41.221309 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:26:41.244983 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 20:26:41.268650 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:26:41.290463 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:26:41.311653 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:26:41.333376 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:26:41.354238 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:26:41.375812 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:26:41.396801 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 20:26:41.411292 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 20:26:41.426012 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 20:26:41.440621 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 20:26:41.455272 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 20:26:41.470060 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 20:26:41.484588 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 20:26:41.499352 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:26:41.504576 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:26:41.515341 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.520289 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.520355 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.525767 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:26:41.535441 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:26:41.544920 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.548814 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.548880 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.554115 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:26:41.563602 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:26:41.572960 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.576827 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.576879 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.581952 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
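The openssl/ln pairs above follow OpenSSL's subject-hash lookup convention: each CA file in /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so the TLS stack can find it. A condensed sketch of the same step for one certificate:

    # link a CA cert under its OpenSSL subject hash so it is trusted system-wide
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"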
	I0731 20:26:41.591530 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:26:41.595068 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:26:41.595134 1111910 kubeadm.go:934] updating node {m02 192.168.39.149 8443 v1.30.3 crio true true} ...
	I0731 20:26:41.595226 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:26:41.595251 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:26:41.595283 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:26:41.609866 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:26:41.609946 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
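The manifest above runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 via ARP on eth0 and load-balances apiserver traffic on port 8443, using the plndr-cp-lock lease for leader election. Once the node is up, VIP placement can be sanity-checked; a sketch assuming kubectl access to the cluster and a shell on the leader node:

    # which control-plane node currently holds the kube-vip leader lease
    kubectl -n kube-system get lease plndr-cp-lock
    # on the leader, the VIP should be bound to eth0
    ip -4 addr show dev eth0 | grep 192.168.39.254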
	I0731 20:26:41.610015 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:26:41.618696 1111910 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 20:26:41.618764 1111910 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 20:26:41.627217 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 20:26:41.627246 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:26:41.627326 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:26:41.627336 1111910 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 20:26:41.627366 1111910 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 20:26:41.631120 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 20:26:41.631149 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 20:26:43.082090 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:26:43.096943 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:26:43.097061 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:26:43.101147 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 20:26:43.101192 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 20:26:49.210010 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:26:49.210111 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:26:49.214827 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 20:26:49.214866 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
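The kubectl/kubelet/kubeadm binaries above are fetched from dl.k8s.io and verified against the published .sha256 files before being copied to /var/lib/minikube/binaries. A hand-rolled equivalent of that download-and-verify step (URLs as seen in this log):

    # download a Kubernetes binary and check it against its published checksum
    VER=v1.30.3
    curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    echo "$(curl -fsSL https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check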
	I0731 20:26:49.416229 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 20:26:49.425127 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 20:26:49.440316 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:26:49.455305 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:26:49.470292 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:26:49.473905 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:26:49.485088 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:26:49.598242 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:26:49.613566 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:49.613993 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:49.614038 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:49.629539 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0731 20:26:49.630002 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:49.630492 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:49.630521 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:49.630885 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:49.631064 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:49.631225 1111910 start.go:317] joinCluster: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:26:49.631361 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 20:26:49.631391 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:49.634093 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:49.634470 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:49.634499 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:49.634601 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:49.634773 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:49.634910 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:49.635043 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:49.772864 1111910 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:49.772931 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xqbiox.ds69zqx06io5ro58 --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m02 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I0731 20:27:09.394637 1111910 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xqbiox.ds69zqx06io5ro58 --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m02 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (19.62167686s)
	I0731 20:27:09.394685 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 20:27:09.949824 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887-m02 minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=false
	I0731 20:27:10.064162 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-430887-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 20:27:10.191263 1111910 start.go:319] duration metric: took 20.56003215s to joinCluster
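After the kubeadm join plus the label and taint removal above, the second member should appear as a schedulable control-plane node. A quick check from the first node's kubeconfig (not part of the test itself):

    # confirm the new member registered as a control-plane node and is schedulable
    kubectl get node ha-430887-m02 -o wide
    kubectl get node ha-430887-m02 -o jsonpath='{.spec.taints}'   # should not include node-role.kubernetes.io/control-plane:NoSchedule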
	I0731 20:27:10.191365 1111910 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:27:10.191644 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:10.192629 1111910 out.go:177] * Verifying Kubernetes components...
	I0731 20:27:10.193900 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:27:10.434173 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:27:10.462180 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:27:10.462499 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 20:27:10.462567 1111910 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0731 20:27:10.462890 1111910 node_ready.go:35] waiting up to 6m0s for node "ha-430887-m02" to be "Ready" ...
	I0731 20:27:10.462994 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:10.463003 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:10.463011 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:10.463014 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:10.487053 1111910 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0731 20:27:10.963184 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:10.963209 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:10.963217 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:10.963221 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:10.969955 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:27:11.464103 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:11.464129 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:11.464139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:11.464143 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:11.514731 1111910 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0731 20:27:11.963905 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:11.963935 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:11.963948 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:11.963955 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:11.967079 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:12.464179 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:12.464208 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:12.464219 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:12.464227 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:12.467975 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:12.468614 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:12.963304 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:12.963330 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:12.963339 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:12.963347 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:12.966502 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:13.463181 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:13.463204 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:13.463212 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:13.463216 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:13.466931 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:13.964077 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:13.964117 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:13.964130 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:13.964135 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:13.967076 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:14.464107 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:14.464141 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:14.464152 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:14.464163 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:14.469219 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:27:14.469765 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:14.963674 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:14.963700 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:14.963713 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:14.963721 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:14.966415 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:15.463216 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:15.463244 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:15.463252 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:15.463257 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:15.466461 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:15.963816 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:15.963844 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:15.963855 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:15.963862 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:15.967018 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:16.464164 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:16.464188 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:16.464198 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:16.464203 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:16.473514 1111910 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 20:27:16.474125 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:16.963440 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:16.963465 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:16.963474 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:16.963477 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:16.966989 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:17.463138 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:17.463164 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:17.463172 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:17.463176 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:17.466092 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:17.963578 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:17.963612 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:17.963625 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:17.963631 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:17.966690 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:18.463154 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:18.463181 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:18.463192 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:18.463197 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:18.468351 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:27:18.963824 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:18.963848 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:18.963856 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:18.963860 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:18.966928 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:18.967729 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:19.463361 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:19.463386 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:19.463395 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:19.463402 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:19.466319 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:19.963623 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:19.963650 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:19.963662 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:19.963674 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:19.966633 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:20.463507 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:20.463531 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:20.463540 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:20.463545 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:20.468084 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:20.964115 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:20.964139 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:20.964147 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:20.964152 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:20.967140 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:21.463895 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:21.463919 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:21.463927 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:21.463931 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:21.466884 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:21.467449 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:21.963986 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:21.964013 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:21.964025 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:21.964033 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:21.967007 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:22.463982 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:22.464008 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:22.464019 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:22.464026 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:22.473217 1111910 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 20:27:22.963730 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:22.963754 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:22.963762 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:22.963768 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:22.968700 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:23.463464 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:23.463494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:23.463507 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:23.463512 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:23.466573 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:23.963354 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:23.963376 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:23.963386 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:23.963391 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:23.966169 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:23.966925 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:24.463459 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:24.463492 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:24.463503 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:24.463525 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:24.468003 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:24.963296 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:24.963326 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:24.963338 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:24.963343 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:24.966267 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:25.463403 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:25.463428 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:25.463436 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:25.463440 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:25.466336 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:25.963317 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:25.963339 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:25.963348 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:25.963353 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:25.966590 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:25.967161 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:26.464157 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:26.464186 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:26.464199 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:26.464206 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:26.468947 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:26.963510 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:26.963534 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:26.963541 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:26.963545 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:26.966817 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:27.464048 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:27.464073 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.464082 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.464085 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.466712 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.963091 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:27.963114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.963123 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.963127 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.966751 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:27.967374 1111910 node_ready.go:49] node "ha-430887-m02" has status "Ready":"True"
	I0731 20:27:27.967395 1111910 node_ready.go:38] duration metric: took 17.504481571s for node "ha-430887-m02" to be "Ready" ...
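The GET loop above is minikube's own readiness waiter polling /api/v1/nodes/ha-430887-m02 until the Ready condition flips to True; the CLI equivalent of the same wait would be:

    # wait for the new node to report the Ready condition
    kubectl wait --for=condition=Ready node/ha-430887-m02 --timeout=6m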
	I0731 20:27:27.967406 1111910 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:27:27.967476 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:27.967487 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.967497 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.967504 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.971987 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:27.978919 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.979026 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rhlnq
	I0731 20:27:27.979036 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.979045 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.979053 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.981720 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.982478 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.982494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.982501 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.982508 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.985374 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.986090 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.986109 1111910 pod_ready.go:81] duration metric: took 7.166365ms for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.986117 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.986166 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tkm49
	I0731 20:27:27.986174 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.986181 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.986185 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.988450 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.989279 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.989295 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.989306 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.989311 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.991607 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.992035 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.992050 1111910 pod_ready.go:81] duration metric: took 5.927492ms for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.992057 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.992124 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887
	I0731 20:27:27.992133 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.992139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.992143 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.994648 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.995292 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.995308 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.995315 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.995319 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.997349 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.997899 1111910 pod_ready.go:92] pod "etcd-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.997916 1111910 pod_ready.go:81] duration metric: took 5.852465ms for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.997926 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.997969 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m02
	I0731 20:27:27.997976 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.997983 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.997987 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.000162 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:28.000811 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.000827 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.000834 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.000838 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.002759 1111910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 20:27:28.003316 1111910 pod_ready.go:92] pod "etcd-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.003337 1111910 pod_ready.go:81] duration metric: took 5.404252ms for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.003354 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.163795 1111910 request.go:629] Waited for 160.355999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:27:28.163873 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:27:28.163882 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.163908 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.163919 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.167277 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.363281 1111910 request.go:629] Waited for 195.296847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:28.363384 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:28.363393 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.363401 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.363407 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.366585 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.367170 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.367195 1111910 pod_ready.go:81] duration metric: took 363.830066ms for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.367205 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.563355 1111910 request.go:629] Waited for 196.072187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:27:28.563468 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:27:28.563479 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.563490 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.563501 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.566800 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.764150 1111910 request.go:629] Waited for 196.3672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.764213 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.764218 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.764225 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.764230 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.767319 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.767791 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.767818 1111910 pod_ready.go:81] duration metric: took 400.603794ms for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.767841 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.963410 1111910 request.go:629] Waited for 195.465822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:27:28.963475 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:27:28.963483 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.963494 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.963503 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.966246 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:29.163142 1111910 request.go:629] Waited for 196.318172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:29.163226 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:29.163231 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.163239 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.163243 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.166099 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:29.166628 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.166646 1111910 pod_ready.go:81] duration metric: took 398.795222ms for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.166656 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.363744 1111910 request.go:629] Waited for 196.991252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:27:29.363809 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:27:29.363815 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.363827 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.363832 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.366932 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.563917 1111910 request.go:629] Waited for 196.383476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.564004 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.564011 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.564020 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.564023 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.567070 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.567552 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.567571 1111910 pod_ready.go:81] duration metric: took 400.909526ms for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.567583 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.764146 1111910 request.go:629] Waited for 196.452227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:27:29.764225 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:27:29.764236 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.764248 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.764254 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.767329 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.963262 1111910 request.go:629] Waited for 195.292706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.963346 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.963352 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.963360 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.963367 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.966479 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.966999 1111910 pod_ready.go:92] pod "kube-proxy-hsd92" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.967026 1111910 pod_ready.go:81] duration metric: took 399.435841ms for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.967039 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.163999 1111910 request.go:629] Waited for 196.881062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:27:30.164104 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:27:30.164114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.164122 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.164126 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.167165 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:30.363179 1111910 request.go:629] Waited for 195.295874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.363263 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.363269 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.363279 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.363286 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.366080 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.366646 1111910 pod_ready.go:92] pod "kube-proxy-m49fz" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:30.366670 1111910 pod_ready.go:81] duration metric: took 399.622051ms for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.366679 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.563714 1111910 request.go:629] Waited for 196.9429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:27:30.563785 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:27:30.563791 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.563799 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.563805 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.566691 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.763977 1111910 request.go:629] Waited for 196.357655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.764072 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.764083 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.764107 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.764113 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.767123 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.767647 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:30.767668 1111910 pod_ready.go:81] duration metric: took 400.981891ms for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.767682 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.963908 1111910 request.go:629] Waited for 196.144056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:27:30.963989 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:27:30.963994 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.964002 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.964006 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.966914 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:31.164000 1111910 request.go:629] Waited for 196.377786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:31.164076 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:31.164084 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.164105 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.164113 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.167404 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.168063 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:31.168103 1111910 pod_ready.go:81] duration metric: took 400.396907ms for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:31.168120 1111910 pod_ready.go:38] duration metric: took 3.200700036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:27:31.168147 1111910 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:27:31.168221 1111910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:27:31.182225 1111910 api_server.go:72] duration metric: took 20.990809123s to wait for apiserver process to appear ...
	I0731 20:27:31.182252 1111910 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:27:31.182279 1111910 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0731 20:27:31.187695 1111910 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0731 20:27:31.187786 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0731 20:27:31.187799 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.187808 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.187817 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.188697 1111910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 20:27:31.188964 1111910 api_server.go:141] control plane version: v1.30.3
	I0731 20:27:31.188990 1111910 api_server.go:131] duration metric: took 6.730148ms to wait for apiserver health ...
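(For context on the healthz wait logged just above: minikube's own implementation is not reproduced here, but a minimal, hypothetical Go sketch of polling an apiserver /healthz endpoint until it returns 200 OK could look like the following. The URL and overall shape come from the log; the TLS-skip setting and function names are illustrative assumptions, not the test's code.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the given /healthz URL until it returns 200 OK
    // or the timeout expires. Illustrative sketch only; not minikube's code.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // The test cluster uses a self-signed CA, so certificate
            // verification is skipped in this sketch (assumption).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.195:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }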
	I0731 20:27:31.189001 1111910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:27:31.363428 1111910 request.go:629] Waited for 174.34329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.363512 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.363520 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.363530 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.363534 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.368392 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:31.372457 1111910 system_pods.go:59] 17 kube-system pods found
	I0731 20:27:31.372482 1111910 system_pods.go:61] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:27:31.372487 1111910 system_pods.go:61] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:27:31.372491 1111910 system_pods.go:61] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:27:31.372496 1111910 system_pods.go:61] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:27:31.372499 1111910 system_pods.go:61] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:27:31.372502 1111910 system_pods.go:61] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:27:31.372505 1111910 system_pods.go:61] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:27:31.372508 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:27:31.372511 1111910 system_pods.go:61] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:27:31.372514 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:27:31.372517 1111910 system_pods.go:61] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:27:31.372520 1111910 system_pods.go:61] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:27:31.372526 1111910 system_pods.go:61] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:27:31.372532 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:27:31.372535 1111910 system_pods.go:61] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:27:31.372537 1111910 system_pods.go:61] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:27:31.372543 1111910 system_pods.go:61] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:27:31.372550 1111910 system_pods.go:74] duration metric: took 183.538397ms to wait for pod list to return data ...
	I0731 20:27:31.372560 1111910 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:27:31.563997 1111910 request.go:629] Waited for 191.354002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:27:31.564105 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:27:31.564115 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.564124 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.564132 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.567231 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.567521 1111910 default_sa.go:45] found service account: "default"
	I0731 20:27:31.567548 1111910 default_sa.go:55] duration metric: took 194.97748ms for default service account to be created ...
	I0731 20:27:31.567559 1111910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:27:31.764058 1111910 request.go:629] Waited for 196.398195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.764136 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.764142 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.764150 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.764156 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.768830 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:31.772955 1111910 system_pods.go:86] 17 kube-system pods found
	I0731 20:27:31.772983 1111910 system_pods.go:89] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:27:31.772990 1111910 system_pods.go:89] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:27:31.772997 1111910 system_pods.go:89] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:27:31.773003 1111910 system_pods.go:89] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:27:31.773009 1111910 system_pods.go:89] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:27:31.773014 1111910 system_pods.go:89] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:27:31.773020 1111910 system_pods.go:89] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:27:31.773026 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:27:31.773032 1111910 system_pods.go:89] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:27:31.773040 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:27:31.773050 1111910 system_pods.go:89] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:27:31.773060 1111910 system_pods.go:89] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:27:31.773068 1111910 system_pods.go:89] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:27:31.773076 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:27:31.773085 1111910 system_pods.go:89] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:27:31.773090 1111910 system_pods.go:89] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:27:31.773097 1111910 system_pods.go:89] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:27:31.773110 1111910 system_pods.go:126] duration metric: took 205.539527ms to wait for k8s-apps to be running ...
	I0731 20:27:31.773123 1111910 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:27:31.773181 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:27:31.786395 1111910 system_svc.go:56] duration metric: took 13.263755ms WaitForService to wait for kubelet
	I0731 20:27:31.786425 1111910 kubeadm.go:582] duration metric: took 21.595015678s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:27:31.786447 1111910 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:27:31.963812 1111910 request.go:629] Waited for 177.278545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0731 20:27:31.963877 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0731 20:27:31.963882 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.963891 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.963895 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.967186 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.968033 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:27:31.968059 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:27:31.968082 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:27:31.968102 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:27:31.968110 1111910 node_conditions.go:105] duration metric: took 181.656598ms to run NodePressure ...
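(The NodePressure lines above report each node's ephemeral-storage and CPU capacity. A minimal client-go sketch that reads the same fields from /api/v1/nodes is shown below; the kubeconfig path is an assumption and this is not the test harness's own code.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the minikube profile writes its own.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity carries the same values logged above (cpu, ephemeral-storage).
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }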
	I0731 20:27:31.968127 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:27:31.968169 1111910 start.go:255] writing updated cluster config ...
	I0731 20:27:31.970116 1111910 out.go:177] 
	I0731 20:27:31.971456 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:31.971557 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:31.972967 1111910 out.go:177] * Starting "ha-430887-m03" control-plane node in "ha-430887" cluster
	I0731 20:27:31.974051 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:27:31.974073 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:27:31.974199 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:27:31.974212 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:27:31.974324 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:31.974524 1111910 start.go:360] acquireMachinesLock for ha-430887-m03: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:27:31.974580 1111910 start.go:364] duration metric: took 28.878µs to acquireMachinesLock for "ha-430887-m03"
	I0731 20:27:31.974604 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:27:31.974749 1111910 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 20:27:31.976083 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:27:31.976194 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:31.976230 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:31.991630 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0731 20:27:31.992116 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:31.992670 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:31.992696 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:31.993083 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:31.993299 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:31.993461 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:31.993660 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:27:31.993691 1111910 client.go:168] LocalClient.Create starting
	I0731 20:27:31.993725 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:27:31.993766 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:27:31.993785 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:27:31.993868 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:27:31.993896 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:27:31.993913 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:27:31.993938 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:27:31.993948 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .PreCreateCheck
	I0731 20:27:31.994185 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:31.994626 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:27:31.994641 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .Create
	I0731 20:27:31.994763 1111910 main.go:141] libmachine: (ha-430887-m03) Creating KVM machine...
	I0731 20:27:31.996124 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found existing default KVM network
	I0731 20:27:31.996264 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found existing private KVM network mk-ha-430887
	I0731 20:27:31.996463 1111910 main.go:141] libmachine: (ha-430887-m03) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 ...
	I0731 20:27:31.996487 1111910 main.go:141] libmachine: (ha-430887-m03) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:27:31.996557 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:31.996455 1112687 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:27:31.996708 1111910 main.go:141] libmachine: (ha-430887-m03) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:27:32.286637 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.286507 1112687 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa...
	I0731 20:27:32.597988 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.597833 1112687 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/ha-430887-m03.rawdisk...
	I0731 20:27:32.598038 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Writing magic tar header
	I0731 20:27:32.598053 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Writing SSH key tar header
	I0731 20:27:32.598069 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.597994 1112687 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 ...
	I0731 20:27:32.598172 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03
	I0731 20:27:32.598197 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 (perms=drwx------)
	I0731 20:27:32.598204 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:27:32.598220 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:27:32.598233 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:27:32.598245 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:27:32.598256 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:27:32.598265 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:27:32.598271 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:27:32.598280 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:27:32.598287 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home
	I0731 20:27:32.598302 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Skipping /home - not owner
	I0731 20:27:32.598314 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:27:32.598331 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:27:32.598344 1111910 main.go:141] libmachine: (ha-430887-m03) Creating domain...
	I0731 20:27:32.599319 1111910 main.go:141] libmachine: (ha-430887-m03) define libvirt domain using xml: 
	I0731 20:27:32.599342 1111910 main.go:141] libmachine: (ha-430887-m03) <domain type='kvm'>
	I0731 20:27:32.599353 1111910 main.go:141] libmachine: (ha-430887-m03)   <name>ha-430887-m03</name>
	I0731 20:27:32.599371 1111910 main.go:141] libmachine: (ha-430887-m03)   <memory unit='MiB'>2200</memory>
	I0731 20:27:32.599384 1111910 main.go:141] libmachine: (ha-430887-m03)   <vcpu>2</vcpu>
	I0731 20:27:32.599395 1111910 main.go:141] libmachine: (ha-430887-m03)   <features>
	I0731 20:27:32.599407 1111910 main.go:141] libmachine: (ha-430887-m03)     <acpi/>
	I0731 20:27:32.599416 1111910 main.go:141] libmachine: (ha-430887-m03)     <apic/>
	I0731 20:27:32.599427 1111910 main.go:141] libmachine: (ha-430887-m03)     <pae/>
	I0731 20:27:32.599438 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.599478 1111910 main.go:141] libmachine: (ha-430887-m03)   </features>
	I0731 20:27:32.599503 1111910 main.go:141] libmachine: (ha-430887-m03)   <cpu mode='host-passthrough'>
	I0731 20:27:32.599516 1111910 main.go:141] libmachine: (ha-430887-m03)   
	I0731 20:27:32.599526 1111910 main.go:141] libmachine: (ha-430887-m03)   </cpu>
	I0731 20:27:32.599535 1111910 main.go:141] libmachine: (ha-430887-m03)   <os>
	I0731 20:27:32.599546 1111910 main.go:141] libmachine: (ha-430887-m03)     <type>hvm</type>
	I0731 20:27:32.599558 1111910 main.go:141] libmachine: (ha-430887-m03)     <boot dev='cdrom'/>
	I0731 20:27:32.599581 1111910 main.go:141] libmachine: (ha-430887-m03)     <boot dev='hd'/>
	I0731 20:27:32.599606 1111910 main.go:141] libmachine: (ha-430887-m03)     <bootmenu enable='no'/>
	I0731 20:27:32.599618 1111910 main.go:141] libmachine: (ha-430887-m03)   </os>
	I0731 20:27:32.599626 1111910 main.go:141] libmachine: (ha-430887-m03)   <devices>
	I0731 20:27:32.599640 1111910 main.go:141] libmachine: (ha-430887-m03)     <disk type='file' device='cdrom'>
	I0731 20:27:32.599654 1111910 main.go:141] libmachine: (ha-430887-m03)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/boot2docker.iso'/>
	I0731 20:27:32.599664 1111910 main.go:141] libmachine: (ha-430887-m03)       <target dev='hdc' bus='scsi'/>
	I0731 20:27:32.599669 1111910 main.go:141] libmachine: (ha-430887-m03)       <readonly/>
	I0731 20:27:32.599699 1111910 main.go:141] libmachine: (ha-430887-m03)     </disk>
	I0731 20:27:32.599719 1111910 main.go:141] libmachine: (ha-430887-m03)     <disk type='file' device='disk'>
	I0731 20:27:32.599736 1111910 main.go:141] libmachine: (ha-430887-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:27:32.599751 1111910 main.go:141] libmachine: (ha-430887-m03)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/ha-430887-m03.rawdisk'/>
	I0731 20:27:32.599765 1111910 main.go:141] libmachine: (ha-430887-m03)       <target dev='hda' bus='virtio'/>
	I0731 20:27:32.599774 1111910 main.go:141] libmachine: (ha-430887-m03)     </disk>
	I0731 20:27:32.599784 1111910 main.go:141] libmachine: (ha-430887-m03)     <interface type='network'>
	I0731 20:27:32.599800 1111910 main.go:141] libmachine: (ha-430887-m03)       <source network='mk-ha-430887'/>
	I0731 20:27:32.599812 1111910 main.go:141] libmachine: (ha-430887-m03)       <model type='virtio'/>
	I0731 20:27:32.599822 1111910 main.go:141] libmachine: (ha-430887-m03)     </interface>
	I0731 20:27:32.599836 1111910 main.go:141] libmachine: (ha-430887-m03)     <interface type='network'>
	I0731 20:27:32.599848 1111910 main.go:141] libmachine: (ha-430887-m03)       <source network='default'/>
	I0731 20:27:32.599861 1111910 main.go:141] libmachine: (ha-430887-m03)       <model type='virtio'/>
	I0731 20:27:32.599875 1111910 main.go:141] libmachine: (ha-430887-m03)     </interface>
	I0731 20:27:32.599887 1111910 main.go:141] libmachine: (ha-430887-m03)     <serial type='pty'>
	I0731 20:27:32.599897 1111910 main.go:141] libmachine: (ha-430887-m03)       <target port='0'/>
	I0731 20:27:32.599907 1111910 main.go:141] libmachine: (ha-430887-m03)     </serial>
	I0731 20:27:32.599918 1111910 main.go:141] libmachine: (ha-430887-m03)     <console type='pty'>
	I0731 20:27:32.599930 1111910 main.go:141] libmachine: (ha-430887-m03)       <target type='serial' port='0'/>
	I0731 20:27:32.599940 1111910 main.go:141] libmachine: (ha-430887-m03)     </console>
	I0731 20:27:32.599949 1111910 main.go:141] libmachine: (ha-430887-m03)     <rng model='virtio'>
	I0731 20:27:32.599963 1111910 main.go:141] libmachine: (ha-430887-m03)       <backend model='random'>/dev/random</backend>
	I0731 20:27:32.599974 1111910 main.go:141] libmachine: (ha-430887-m03)     </rng>
	I0731 20:27:32.599984 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.599992 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.600004 1111910 main.go:141] libmachine: (ha-430887-m03)   </devices>
	I0731 20:27:32.600014 1111910 main.go:141] libmachine: (ha-430887-m03) </domain>
	I0731 20:27:32.600026 1111910 main.go:141] libmachine: (ha-430887-m03) 
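(The lines above print the libvirt domain XML that the kvm2 driver is about to define for ha-430887-m03. As a rough, hypothetical illustration of that step, and not the driver's actual code, defining and starting a domain from such XML with the libvirt Go bindings looks roughly like this; the XML file path is an assumption.)

    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Domain XML as printed in the log above (file path is an assumption).
        xml, err := os.ReadFile("ha-430887-m03.xml")
        if err != nil {
            panic(err)
        }
        // KVMQemuURI from the cluster config is qemu:///system.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // register the domain definition
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("Creating domain..." in the log)
            panic(err)
        }
        fmt.Println("domain defined and started")
    }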
	I0731 20:27:32.607824 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:f2:bd:db in network default
	I0731 20:27:32.608459 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring networks are active...
	I0731 20:27:32.608476 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:32.609170 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring network default is active
	I0731 20:27:32.609476 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring network mk-ha-430887 is active
	I0731 20:27:32.609833 1111910 main.go:141] libmachine: (ha-430887-m03) Getting domain xml...
	I0731 20:27:32.610534 1111910 main.go:141] libmachine: (ha-430887-m03) Creating domain...
	I0731 20:27:33.830734 1111910 main.go:141] libmachine: (ha-430887-m03) Waiting to get IP...
	I0731 20:27:33.831662 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:33.832049 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:33.832079 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:33.832025 1112687 retry.go:31] will retry after 254.049554ms: waiting for machine to come up
	I0731 20:27:34.087542 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.088027 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.088056 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.087980 1112687 retry.go:31] will retry after 271.956827ms: waiting for machine to come up
	I0731 20:27:34.361595 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.362065 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.362097 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.362045 1112687 retry.go:31] will retry after 481.093647ms: waiting for machine to come up
	I0731 20:27:34.844678 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.845084 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.845107 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.845047 1112687 retry.go:31] will retry after 553.436017ms: waiting for machine to come up
	I0731 20:27:35.399824 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:35.400216 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:35.400263 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:35.400174 1112687 retry.go:31] will retry after 573.943855ms: waiting for machine to come up
	I0731 20:27:35.976809 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:35.977282 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:35.977311 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:35.977230 1112687 retry.go:31] will retry after 719.564235ms: waiting for machine to come up
	I0731 20:27:36.698107 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:36.698492 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:36.698517 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:36.698463 1112687 retry.go:31] will retry after 843.432167ms: waiting for machine to come up
	I0731 20:27:37.543764 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:37.544288 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:37.544314 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:37.544236 1112687 retry.go:31] will retry after 1.27103611s: waiting for machine to come up
	I0731 20:27:38.817349 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:38.817839 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:38.817865 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:38.817797 1112687 retry.go:31] will retry after 1.569967185s: waiting for machine to come up
	I0731 20:27:40.389169 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:40.389722 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:40.389749 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:40.389681 1112687 retry.go:31] will retry after 2.27233384s: waiting for machine to come up
	I0731 20:27:42.664409 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:42.664907 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:42.664938 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:42.664850 1112687 retry.go:31] will retry after 2.169072633s: waiting for machine to come up
	I0731 20:27:44.837083 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:44.837448 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:44.837472 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:44.837413 1112687 retry.go:31] will retry after 2.737790564s: waiting for machine to come up
	I0731 20:27:47.577033 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:47.577418 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:47.577445 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:47.577369 1112687 retry.go:31] will retry after 3.226247613s: waiting for machine to come up
	I0731 20:27:50.805074 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:50.805502 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:50.805528 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:50.805455 1112687 retry.go:31] will retry after 4.606974131s: waiting for machine to come up
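(The repeated "will retry after ..." lines above come from a retry loop with a growing, randomized delay while the driver waits for the new VM to obtain a DHCP lease. A generic sketch of that pattern follows; it is purely illustrative and not minikube's retry.go.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil calls check repeatedly, sleeping a randomized, growing delay
    // between attempts, until it succeeds or the deadline passes.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s\n", sleep)
            time.Sleep(sleep)
            delay *= 2 // back off before the next attempt
        }
        return errors.New("timed out waiting for condition")
    }

    func main() {
        attempts := 0
        _ = retryUntil(30*time.Second, func() error {
            attempts++
            if attempts < 5 {
                return errors.New("machine has no IP yet") // placeholder condition
            }
            return nil
        })
    }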
	I0731 20:27:55.416718 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.417104 1111910 main.go:141] libmachine: (ha-430887-m03) Found IP for machine: 192.168.39.44
	I0731 20:27:55.417133 1111910 main.go:141] libmachine: (ha-430887-m03) Reserving static IP address...
	I0731 20:27:55.417146 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.417667 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find host DHCP lease matching {name: "ha-430887-m03", mac: "52:54:00:52:fa:c0", ip: "192.168.39.44"} in network mk-ha-430887
	I0731 20:27:55.492542 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Getting to WaitForSSH function...
	I0731 20:27:55.492580 1111910 main.go:141] libmachine: (ha-430887-m03) Reserved static IP address: 192.168.39.44
	I0731 20:27:55.492594 1111910 main.go:141] libmachine: (ha-430887-m03) Waiting for SSH to be available...
	I0731 20:27:55.495071 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.495489 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.495519 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.495687 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using SSH client type: external
	I0731 20:27:55.495719 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa (-rw-------)
	I0731 20:27:55.495755 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:27:55.495773 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | About to run SSH command:
	I0731 20:27:55.495790 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | exit 0
	I0731 20:27:55.615853 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 20:27:55.616240 1111910 main.go:141] libmachine: (ha-430887-m03) KVM machine creation complete!
	I0731 20:27:55.616518 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:55.617069 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:55.617296 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:55.617490 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:27:55.617518 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:27:55.618707 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:27:55.618724 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:27:55.618732 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:27:55.618740 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.620837 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.621225 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.621250 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.621421 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.621598 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.621758 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.621880 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.622039 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.622270 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.622281 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:27:55.719264 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
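(The "exit 0" probe above is how the driver confirms SSH is usable on the new VM. Below is a self-contained sketch of the same check using golang.org/x/crypto/ssh; the key path, user, and address are copied from the log, while the helper itself is an illustration rather than the driver's code.)

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.44:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Same trivial command the log runs to prove SSH works.
        if err := session.Run("exit 0"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is available")
    }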
	I0731 20:27:55.719291 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:27:55.719303 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.721845 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.722169 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.722197 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.722350 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.722537 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.722704 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.722868 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.723055 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.723250 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.723262 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:27:55.820540 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:27:55.820634 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:27:55.820648 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:27:55.820661 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:55.820919 1111910 buildroot.go:166] provisioning hostname "ha-430887-m03"
	I0731 20:27:55.820944 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:55.821132 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.823922 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.824353 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.824378 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.824570 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.824755 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.824938 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.825095 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.825278 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.825519 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.825539 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887-m03 && echo "ha-430887-m03" | sudo tee /etc/hostname
	I0731 20:27:55.941292 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887-m03
	
	I0731 20:27:55.941322 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.944171 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.944532 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.944579 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.944724 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.944953 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.945134 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.945268 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.945458 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.945626 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.945642 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:27:56.052176 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
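
The hostname is recorded in /etc/hosts with the grep/sed/tee sequence shown above. For reference, a minimal Go sketch (hypothetical helper, not minikube's actual code) that builds the same shell snippet for an arbitrary hostname:

    // Sketch: render the /etc/hosts update command from the log for a given hostname.
    // The grep -xq check avoids duplicate entries; the sed branch rewrites an existing
    // 127.0.1.1 mapping, otherwise the name is appended with tee -a.
    package main

    import "fmt"

    func hostsUpdateCmd(hostname string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
            else
                echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("ha-430887-m03"))
    }
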
	I0731 20:27:56.052206 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:27:56.052231 1111910 buildroot.go:174] setting up certificates
	I0731 20:27:56.052241 1111910 provision.go:84] configureAuth start
	I0731 20:27:56.052252 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:56.052539 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.055307 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.055713 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.055742 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.055895 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.058168 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.058509 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.058539 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.058666 1111910 provision.go:143] copyHostCerts
	I0731 20:27:56.058702 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:27:56.058739 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:27:56.058749 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:27:56.058835 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:27:56.058911 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:27:56.058929 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:27:56.058937 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:27:56.058960 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:27:56.059054 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:27:56.059080 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:27:56.059088 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:27:56.059128 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
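
The copyHostCerts step above refreshes each of ca.pem, cert.pem and key.pem by removing any existing copy and writing a fresh one. A minimal Go sketch of that remove-then-copy pattern (file names taken from the log; the helper name and destination directory are placeholders):

    // Sketch: remove any stale copy of a cert, then copy the source file into place.
    package main

    import (
        "io"
        "log"
        "os"
        "path/filepath"
    )

    func refreshCert(src, dstDir string) error {
        dst := filepath.Join(dstDir, filepath.Base(src))
        if _, err := os.Stat(dst); err == nil {
            // found existing copy, removing ...
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Demo paths only; the real run copies from .minikube/certs into .minikube.
        for _, f := range []string{"certs/ca.pem", "certs/cert.pem", "certs/key.pem"} {
            if err := refreshCert(f, "."); err != nil {
                log.Println(err)
            }
        }
    }
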
	I0731 20:27:56.059202 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887-m03 san=[127.0.0.1 192.168.39.44 ha-430887-m03 localhost minikube]
	I0731 20:27:56.128693 1111910 provision.go:177] copyRemoteCerts
	I0731 20:27:56.128774 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:27:56.128811 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.131590 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.132002 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.132027 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.132235 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.132386 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.132497 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.132600 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.210027 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:27:56.210117 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:27:56.234161 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:27:56.234258 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:27:56.257807 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:27:56.257898 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:27:56.279968 1111910 provision.go:87] duration metric: took 227.707935ms to configureAuth
	I0731 20:27:56.280007 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:27:56.280328 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:56.280442 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.283580 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.284020 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.284052 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.284280 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.284547 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.284743 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.284916 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.285191 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:56.285378 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:56.285399 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:27:56.530667 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:27:56.530701 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:27:56.530712 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetURL
	I0731 20:27:56.532410 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using libvirt version 6000000
	I0731 20:27:56.535092 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.535502 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.535536 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.535713 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:27:56.535725 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:27:56.535731 1111910 client.go:171] duration metric: took 24.542033072s to LocalClient.Create
	I0731 20:27:56.535758 1111910 start.go:167] duration metric: took 24.542097631s to libmachine.API.Create "ha-430887"
	I0731 20:27:56.535771 1111910 start.go:293] postStartSetup for "ha-430887-m03" (driver="kvm2")
	I0731 20:27:56.535785 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:27:56.535810 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.536131 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:27:56.536159 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.538554 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.538957 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.538990 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.539199 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.539379 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.539519 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.539645 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.618288 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:27:56.622369 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:27:56.622403 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:27:56.622470 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:27:56.622557 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:27:56.622575 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:27:56.622696 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:27:56.631574 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:27:56.656215 1111910 start.go:296] duration metric: took 120.426549ms for postStartSetup
	I0731 20:27:56.656287 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:56.656987 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.659613 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.660171 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.660202 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.660490 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:56.660690 1111910 start.go:128] duration metric: took 24.685929924s to createHost
	I0731 20:27:56.660718 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.663033 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.663416 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.663445 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.663595 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.663818 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.664005 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.664154 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.664307 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:56.664511 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:56.664522 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:27:56.764455 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457676.741598635
	
	I0731 20:27:56.764478 1111910 fix.go:216] guest clock: 1722457676.741598635
	I0731 20:27:56.764498 1111910 fix.go:229] Guest: 2024-07-31 20:27:56.741598635 +0000 UTC Remote: 2024-07-31 20:27:56.660703552 +0000 UTC m=+157.784857276 (delta=80.895083ms)
	I0731 20:27:56.764521 1111910 fix.go:200] guest clock delta is within tolerance: 80.895083ms
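
The guest clock check above compares the time reported by the guest over SSH with the local host time and accepts the machine when the skew is small. A small Go sketch of that comparison (the 1s tolerance used here is an assumption; only the ~80ms delta comes from the log):

    // Sketch: compute the absolute clock skew between guest and host and compare
    // it against a tolerance, mirroring the "guest clock delta is within tolerance" line.
    package main

    import (
        "fmt"
        "time"
    )

    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        guest := time.Unix(1722457676, 741598635)          // guest clock from the log
        host := guest.Add(80 * time.Millisecond)           // delta similar to the ~80ms observed
        d, ok := withinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
        fmt.Println(d, ok)
    }
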
	I0731 20:27:56.764528 1111910 start.go:83] releasing machines lock for "ha-430887-m03", held for 24.789935728s
	I0731 20:27:56.764554 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.764861 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.767477 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.767875 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.767906 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.770121 1111910 out.go:177] * Found network options:
	I0731 20:27:56.771478 1111910 out.go:177]   - NO_PROXY=192.168.39.195,192.168.39.149
	W0731 20:27:56.772541 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 20:27:56.772577 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:27:56.772597 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773107 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773299 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773408 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:27:56.773445 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	W0731 20:27:56.773537 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 20:27:56.773561 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:27:56.773616 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:27:56.773634 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.776404 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776473 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776815 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.776838 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776867 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.776887 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776981 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.777091 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.777178 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.777354 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.777368 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.777542 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.777561 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.777708 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:57.006577 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:27:57.012469 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:27:57.012545 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:27:57.028264 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:27:57.028289 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:27:57.028367 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:27:57.043635 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:27:57.056608 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:27:57.056683 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:27:57.069906 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:27:57.082502 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:27:57.197561 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:27:57.331946 1111910 docker.go:233] disabling docker service ...
	I0731 20:27:57.332028 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:27:57.346408 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:27:57.358495 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:27:57.500031 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:27:57.620301 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:27:57.633465 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:27:57.650241 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:27:57.650304 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.660892 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:27:57.660999 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.670938 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.681757 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.691993 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:27:57.702682 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.713001 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.729298 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
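
Each CRI-O option above (pause_image, cgroup_manager, and so on) is set by rewriting the matching key in /etc/crio/crio.conf.d/02-crio.conf with sed. A Go sketch of that pattern (helper name hypothetical; the command template mirrors the log):

    // Sketch: build the sed command that replaces a "key = ..." line in a drop-in config.
    package main

    import "fmt"

    func setConfKeyCmd(file, key, value string) string {
        return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, file)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        fmt.Println(setConfKeyCmd(conf, "pause_image", "registry.k8s.io/pause:3.9"))
        fmt.Println(setConfKeyCmd(conf, "cgroup_manager", "cgroupfs"))
    }
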
	I0731 20:27:57.740962 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:27:57.749983 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:27:57.750050 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:27:57.761442 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
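
When the bridge-netfilter sysctl is missing (status 255 above), the provisioner falls back to loading br_netfilter before enabling IP forwarding. A condensed Go sketch of that check-then-fallback sequence (command strings taken from the log, error handling simplified):

    // Sketch: probe the sysctl, load the kernel module if it is absent, then enable forwarding.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(cmd string) error {
        return exec.Command("sh", "-c", cmd).Run()
    }

    func main() {
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            log.Printf("bridge netfilter not available yet (%v), loading br_netfilter", err)
            if err := run("sudo modprobe br_netfilter"); err != nil {
                log.Fatal(err)
            }
        }
        if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
            log.Fatal(err)
        }
    }
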
	I0731 20:27:57.770383 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:27:57.901256 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:27:58.031043 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:27:58.031132 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:27:58.036217 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:27:58.036297 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:27:58.039857 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:27:58.073700 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:27:58.073791 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:27:58.101707 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:27:58.132748 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:27:58.133969 1111910 out.go:177]   - env NO_PROXY=192.168.39.195
	I0731 20:27:58.135216 1111910 out.go:177]   - env NO_PROXY=192.168.39.195,192.168.39.149
	I0731 20:27:58.136283 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:58.139221 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:58.139646 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:58.139674 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:58.139919 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:27:58.143957 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:27:58.155511 1111910 mustload.go:65] Loading cluster: ha-430887
	I0731 20:27:58.155771 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:58.156070 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:58.156132 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:58.170988 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0731 20:27:58.171503 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:58.171986 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:58.172008 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:58.172351 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:58.172565 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:27:58.174227 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:27:58.174543 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:58.174589 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:58.190061 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I0731 20:27:58.190699 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:58.191291 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:58.191320 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:58.191701 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:58.191891 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:27:58.192069 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.44
	I0731 20:27:58.192083 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:27:58.192120 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.192284 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:27:58.192341 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:27:58.192357 1111910 certs.go:256] generating profile certs ...
	I0731 20:27:58.192454 1111910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:27:58.192483 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416
	I0731 20:27:58.192504 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.44 192.168.39.254]
	I0731 20:27:58.349602 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 ...
	I0731 20:27:58.349639 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416: {Name:mk04931c2e9aad5b0d132e036e10941af8973c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.349824 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416 ...
	I0731 20:27:58.349839 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416: {Name:mka66a23f9bd02ebe6126d22c4955d8613c8bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.349908 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:27:58.350027 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
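
The apiserver certificate is signed for every address a client might use: the service ClusterIP, loopback, each control-plane node IP, and the kube-vip VIP. A small Go sketch (hypothetical helper) assembling the SAN list seen in the generation line above:

    // Sketch: build the apiserver SAN list from the fixed service/loopback addresses,
    // the control-plane node IPs, and the virtual IP.
    package main

    import "fmt"

    func apiserverSANs(nodeIPs []string, vip string) []string {
        sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1"}
        sans = append(sans, nodeIPs...)
        return append(sans, vip)
    }

    func main() {
        fmt.Println(apiserverSANs(
            []string{"192.168.39.195", "192.168.39.149", "192.168.39.44"},
            "192.168.39.254"))
    }
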
	I0731 20:27:58.350163 1111910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:27:58.350180 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:27:58.350193 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:27:58.350206 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:27:58.350219 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:27:58.350231 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:27:58.350244 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:27:58.350256 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:27:58.350268 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:27:58.350318 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:27:58.350347 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:27:58.350357 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:27:58.350380 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:27:58.350401 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:27:58.350424 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:27:58.350467 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:27:58.350493 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.350509 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.350524 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.350575 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:27:58.353482 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:27:58.353869 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:27:58.353897 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:27:58.354119 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:27:58.354351 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:27:58.354518 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:27:58.354661 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:27:58.428500 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 20:27:58.433376 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 20:27:58.448961 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 20:27:58.453144 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 20:27:58.467092 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 20:27:58.473427 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 20:27:58.482991 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 20:27:58.487189 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 20:27:58.497232 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 20:27:58.501529 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 20:27:58.511687 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 20:27:58.515560 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 20:27:58.525780 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:27:58.549881 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:27:58.575035 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:27:58.599186 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:27:58.622975 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 20:27:58.645305 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:27:58.668151 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:27:58.690077 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:27:58.712675 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:27:58.735426 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:27:58.758296 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:27:58.780077 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 20:27:58.795057 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 20:27:58.810126 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 20:27:58.824954 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 20:27:58.840504 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 20:27:58.856141 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 20:27:58.871975 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 20:27:58.887214 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:27:58.892770 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:27:58.902504 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.906561 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.906612 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.911994 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:27:58.921848 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:27:58.931592 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.935659 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.935724 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.941067 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:27:58.952263 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:27:58.961749 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.965702 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.965752 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.971258 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
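
Each CA above is installed by copying the PEM into /usr/share/ca-certificates and symlinking it under /etc/ssl/certs as "<subject hash>.0", which is how OpenSSL locates it by hash lookup. A Go sketch (hypothetical helper, slightly simplified) that derives the symlink command from the openssl hash output:

    // Sketch: compute the OpenSSL subject hash of a PEM and build the ln -fs command
    // that publishes it under /etc/ssl/certs/<hash>.0.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hashLinkCmd(pem string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        return fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s.0", pem, hash), nil
    }

    func main() {
        cmd, err := hashLinkCmd("/etc/ssl/certs/minikubeCA.pem")
        if err != nil {
            fmt.Println("openssl not available or cert missing:", err)
            return
        }
        fmt.Println(cmd)
    }
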
	I0731 20:27:58.981081 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:27:58.984988 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:27:58.985040 1111910 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.30.3 crio true true} ...
	I0731 20:27:58.985146 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
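
The kubelet unit is rendered per node, so only the binary version, hostname override, and node IP change between m01, m02, and m03. A minimal Go sketch (hypothetical helper) producing the ExecStart line shown above:

    // Sketch: render the node-specific kubelet ExecStart line.
    package main

    import "fmt"

    func kubeletExecStart(version, nodeName, nodeIP string) string {
        return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
            "--config=/var/lib/kubelet/config.yaml "+
            "--hostname-override=%s "+
            "--kubeconfig=/etc/kubernetes/kubelet.conf "+
            "--node-ip=%s",
            version, nodeName, nodeIP)
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.30.3", "ha-430887-m03", "192.168.39.44"))
    }
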
	I0731 20:27:58.985185 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:27:58.985229 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:27:58.999089 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:27:58.999171 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
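
The kube-vip manifest above is mostly a fixed set of environment variables; the VIP address, the port, and the lb_enable/lb_port pair (added when control-plane load-balancing is auto-enabled) are the variable parts. A Go sketch (hypothetical helper) assembling that environment:

    // Sketch: collect the kube-vip env entries, toggling the load-balancer pair on demand.
    package main

    import "fmt"

    func kubeVIPEnv(vip, port string, lbEnable bool) map[string]string {
        env := map[string]string{
            "vip_arp":            "true",
            "port":               port,
            "vip_interface":      "eth0",
            "vip_cidr":           "32",
            "cp_enable":          "true",
            "cp_namespace":       "kube-system",
            "vip_leaderelection": "true",
            "address":            vip,
        }
        if lbEnable {
            env["lb_enable"] = "true"
            env["lb_port"] = port
        }
        return env
    }

    func main() {
        fmt.Println(kubeVIPEnv("192.168.39.254", "8443", true))
    }
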
	I0731 20:27:58.999232 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:27:59.008764 1111910 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 20:27:59.008845 1111910 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 20:27:59.018072 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 20:27:59.018093 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 20:27:59.018101 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:27:59.018147 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:27:59.018163 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:27:59.018076 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 20:27:59.018198 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:27:59.018278 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:27:59.032183 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:27:59.032233 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 20:27:59.032262 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:27:59.032264 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 20:27:59.032309 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 20:27:59.032340 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 20:27:59.056582 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 20:27:59.056618 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
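
The "Not caching binary" lines above show that each Kubernetes binary's download URL carries a checksum reference pointing at its .sha256 file on dl.k8s.io; when the stat existence checks fail, the cached binaries are then scp'd into /var/lib/minikube/binaries/v1.30.3. A Go sketch building those URLs exactly as logged:

    // Sketch: construct the dl.k8s.io URL (with sha256 checksum reference) for each binary.
    package main

    import "fmt"

    func binaryURL(version, osName, arch, binary string) string {
        base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s", version, osName, arch, binary)
        return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
        for _, b := range []string{"kubectl", "kubeadm", "kubelet"} {
            fmt.Println(binaryURL("v1.30.3", "linux", "amd64", b))
        }
    }
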
	I0731 20:27:59.900501 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 20:27:59.910098 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 20:27:59.926467 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:27:59.941870 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:27:59.958754 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:27:59.962670 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:27:59.975556 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:28:00.099632 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:28:00.127831 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:28:00.128284 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:28:00.128335 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:28:00.144945 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0731 20:28:00.145386 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:28:00.145978 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:28:00.146008 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:28:00.146482 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:28:00.146721 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:28:00.146914 1111910 start.go:317] joinCluster: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:28:00.147064 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 20:28:00.147082 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:28:00.149943 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:28:00.150296 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:28:00.150323 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:28:00.150515 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:28:00.150687 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:28:00.150828 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:28:00.150992 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:28:00.313550 1111910 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:28:00.313622 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ctfq0t.spxvt2sjrrhnv26x --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I0731 20:28:21.293546 1111910 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ctfq0t.spxvt2sjrrhnv26x --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (20.979881723s)
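
The two entries above are the heart of the control-plane join: a fresh join command is generated on the primary ("kubeadm token create --print-join-command --ttl=0") and then executed on the new machine with additional control-plane flags. Purely as an illustration of how those flags are appended (helper name and signature are hypothetical, not minikube's code):

    package join

    import "fmt"

    // controlPlaneJoinCmd appends the control-plane specific flags used in the
    // log to a base "kubeadm join ..." command printed by the primary node.
    func controlPlaneJoinCmd(baseCmd, nodeName, advertiseIP string) string {
        return fmt.Sprintf("%s --ignore-preflight-errors=all "+
            "--cri-socket unix:///var/run/crio/crio.sock "+
            "--node-name=%s --control-plane "+
            "--apiserver-advertise-address=%s --apiserver-bind-port=8443",
            baseCmd, nodeName, advertiseIP)
    }
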
	I0731 20:28:21.293590 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 20:28:21.865744 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887-m03 minikube.k8s.io/updated_at=2024_07_31T20_28_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=false
	I0731 20:28:21.977275 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-430887-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 20:28:22.077506 1111910 start.go:319] duration metric: took 21.93058496s to joinCluster
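
After the join, the node is labelled with minikube metadata and its control-plane NoSchedule taint is removed so it can also schedule workloads (the two kubectl invocations above). Done programmatically with client-go it would look roughly like the sketch below; it assumes a clientset is already available and is not the code minikube actually runs.

    package nodesetup

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // labelAndUntaint mirrors the "kubectl label" and "kubectl taint ...-" calls
    // in the log: mark the node as a non-primary minikube node and drop the
    // control-plane NoSchedule taint.
    func labelAndUntaint(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        node.Labels["minikube.k8s.io/primary"] = "false"

        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            if t.Key != "node-role.kubernetes.io/control-plane" {
                kept = append(kept, t)
            }
        }
        node.Spec.Taints = kept

        _, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }
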
	I0731 20:28:22.077606 1111910 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:28:22.077966 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:28:22.079109 1111910 out.go:177] * Verifying Kubernetes components...
	I0731 20:28:22.080547 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:28:22.327292 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:28:22.357460 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:28:22.357851 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 20:28:22.357922 1111910 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
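
The client config above is loaded from the test's kubeconfig, which points at the HA virtual IP (192.168.39.254); the warning shows that host being treated as stale and swapped for the primary control plane's direct endpoint before any requests are made. A minimal sketch of that pattern with client-go (the override address is simply the one from this run):

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newDirectClient loads a kubeconfig and then overrides the server URL,
    // e.g. replacing an HA virtual IP with a specific control-plane endpoint
    // the way the log above does.
    func newDirectClient(kubeconfigPath, directHost string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.Host = directHost // e.g. "https://192.168.39.195:8443"
        return kubernetes.NewForConfig(cfg)
    }
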
	I0731 20:28:22.358185 1111910 node_ready.go:35] waiting up to 6m0s for node "ha-430887-m03" to be "Ready" ...
	I0731 20:28:22.358270 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:22.358278 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:22.358285 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:22.358289 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:22.361629 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:22.859113 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:22.859140 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:22.859151 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:22.859155 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:22.862632 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:23.358947 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:23.358975 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:23.358988 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:23.358996 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:23.369478 1111910 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 20:28:23.858780 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:23.858805 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:23.858814 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:23.858824 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:23.862357 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:24.359056 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:24.359081 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:24.359091 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:24.359095 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:24.362149 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:24.362753 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:24.858482 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:24.858508 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:24.858517 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:24.858521 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:24.862014 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:25.358524 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:25.358549 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:25.358559 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:25.358564 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:25.362085 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:25.859049 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:25.859078 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:25.859089 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:25.859098 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:25.862718 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:26.358553 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:26.358591 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:26.358603 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:26.358611 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:26.362273 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:26.362829 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:26.858778 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:26.858816 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:26.858825 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:26.858829 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:26.862054 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:27.359228 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:27.359255 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:27.359267 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:27.359273 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:27.363032 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:27.859079 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:27.859105 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:27.859115 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:27.859119 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:27.862239 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:28.359250 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:28.359273 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:28.359281 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:28.359287 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:28.362369 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:28.363031 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:28.859429 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:28.859453 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:28.859461 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:28.859465 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:28.862564 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:29.358482 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:29.358505 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:29.358514 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:29.358519 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:29.362340 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:29.858927 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:29.858957 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:29.858968 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:29.858973 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:29.863066 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:28:30.358362 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:30.358386 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:30.358394 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:30.358399 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:30.361551 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:30.859207 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:30.859233 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:30.859245 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:30.859249 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:30.862848 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:30.863830 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:31.359206 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:31.359230 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:31.359238 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:31.359241 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:31.362446 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:31.858555 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:31.858580 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:31.858588 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:31.858592 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:31.861681 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:32.359086 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:32.359117 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:32.359130 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:32.359136 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:32.362580 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:32.858962 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:32.858987 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:32.858996 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:32.858999 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:32.862340 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:33.359108 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:33.359131 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:33.359139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:33.359144 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:33.362276 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:33.362878 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:33.859249 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:33.859274 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:33.859283 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:33.859287 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:33.862484 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:34.358822 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:34.358850 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:34.358861 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:34.358866 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:34.361891 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:34.859317 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:34.859341 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:34.859354 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:34.859359 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:34.862595 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:35.358965 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:35.358990 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:35.359000 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:35.359006 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:35.362626 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:35.363173 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:35.858529 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:35.858553 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:35.858563 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:35.858566 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:35.861484 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:36.359392 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:36.359418 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:36.359429 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:36.359439 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:36.362733 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:36.858596 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:36.858630 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:36.858647 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:36.858651 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:36.861987 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:37.358480 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:37.358504 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:37.358513 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:37.358516 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:37.361258 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:37.858784 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:37.858807 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:37.858815 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:37.858820 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:37.861857 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:37.862477 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:38.358761 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:38.358790 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:38.358806 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:38.358811 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:38.361998 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:38.858750 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:38.858771 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:38.858780 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:38.858785 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:38.861825 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:39.358453 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.358477 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.358485 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.358489 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.361356 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.361973 1111910 node_ready.go:49] node "ha-430887-m03" has status "Ready":"True"
	I0731 20:28:39.361993 1111910 node_ready.go:38] duration metric: took 17.003792582s for node "ha-430887-m03" to be "Ready" ...
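
The seventeen seconds of GETs above are a plain readiness poll: the node object for ha-430887-m03 is fetched roughly twice a second until its Ready condition flips to True. The equivalent loop written against client-go, as an illustrative sketch (interval and timeout chosen to match the ~500ms cadence and 6m0s budget in the log, not taken from minikube's source):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its Ready condition reports True, in the
    // spirit of the node_ready.go loop above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as "not yet", keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
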
	I0731 20:28:39.362002 1111910 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:28:39.362058 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:39.362070 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.362077 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.362082 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.367821 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:28:39.373864 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.373959 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rhlnq
	I0731 20:28:39.373969 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.373979 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.373985 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.376589 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.377432 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.377446 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.377456 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.377462 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.379891 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.380404 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.380420 1111910 pod_ready.go:81] duration metric: took 6.531856ms for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
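
Each per-pod wait that follows has the same shape: fetch the pod, inspect its Ready condition, then fetch the hosting node as a cross-check. The pod half of that test, expressed as a small helper (an illustrative sketch, not minikube's pod_ready.go):

    package readiness

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether a pod's Ready condition is True, which is the
    // state the log prints as has status "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
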
	I0731 20:28:39.380430 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.380479 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tkm49
	I0731 20:28:39.380488 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.380497 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.380503 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.383100 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.383946 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.383957 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.383965 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.383970 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.386466 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.387082 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.387097 1111910 pod_ready.go:81] duration metric: took 6.65916ms for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.387107 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.387157 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887
	I0731 20:28:39.387166 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.387176 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.387183 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.389871 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.390545 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.390559 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.390569 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.390573 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.392729 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.393259 1111910 pod_ready.go:92] pod "etcd-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.393278 1111910 pod_ready.go:81] duration metric: took 6.163758ms for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.393286 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.393328 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m02
	I0731 20:28:39.393335 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.393342 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.393346 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.395308 1111910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 20:28:39.395912 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:39.395928 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.395937 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.395945 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.398209 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.398642 1111910 pod_ready.go:92] pod "etcd-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.398658 1111910 pod_ready.go:81] duration metric: took 5.366532ms for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.398664 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.559071 1111910 request.go:629] Waited for 160.328596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m03
	I0731 20:28:39.559141 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m03
	I0731 20:28:39.559153 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.559165 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.559174 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.566687 1111910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 20:28:39.758667 1111910 request.go:629] Waited for 191.268528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.758747 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.758755 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.758762 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.758766 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.761660 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.762098 1111910 pod_ready.go:92] pod "etcd-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.762117 1111910 pod_ready.go:81] duration metric: took 363.447423ms for pod "etcd-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
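
The "request.go:629] Waited for ... due to client-side throttling" entries come from client-go's own rate limiter, not the API server: with QPS and Burst left at zero in the rest.Config shown earlier, the client uses its conservative defaults (5 QPS, burst 10), so the back-to-back pod and node GETs get spaced out by a couple of hundred milliseconds each. When that throttling is unwanted, the usual remedy is to raise the limits on the config before building the clientset; a minimal sketch with arbitrary example values:

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newFasterClient bumps the client-side rate limits that produce the
    // "Waited for ... due to client-side throttling" messages when left at
    // their defaults. The values below are arbitrary examples.
    func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
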
	I0731 20:28:39.762136 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.959161 1111910 request.go:629] Waited for 196.944378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:28:39.959257 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:28:39.959269 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.959280 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.959287 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.962156 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.159092 1111910 request.go:629] Waited for 196.180513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:40.159158 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:40.159165 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.159184 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.159193 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.161538 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.162046 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.162067 1111910 pod_ready.go:81] duration metric: took 399.922435ms for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.162076 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.359140 1111910 request.go:629] Waited for 196.970446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:28:40.359207 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:28:40.359212 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.359220 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.359224 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.362223 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.559445 1111910 request.go:629] Waited for 196.359268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:40.559517 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:40.559522 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.559530 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.559534 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.562189 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.562886 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.562904 1111910 pod_ready.go:81] duration metric: took 400.82189ms for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.562914 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.759081 1111910 request.go:629] Waited for 196.073598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m03
	I0731 20:28:40.759149 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m03
	I0731 20:28:40.759154 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.759162 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.759166 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.762227 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:40.958936 1111910 request.go:629] Waited for 195.897311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:40.958998 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:40.959003 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.959010 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.959014 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.963309 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:28:40.963874 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.963897 1111910 pod_ready.go:81] duration metric: took 400.97635ms for pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.963911 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.159025 1111910 request.go:629] Waited for 195.001102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:28:41.159112 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:28:41.159123 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.159136 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.159145 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.162347 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.359414 1111910 request.go:629] Waited for 196.355123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:41.359478 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:41.359483 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.359491 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.359497 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.362550 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.363184 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:41.363210 1111910 pod_ready.go:81] duration metric: took 399.290567ms for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.363224 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.559165 1111910 request.go:629] Waited for 195.855971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:28:41.559242 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:28:41.559249 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.559280 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.559290 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.562726 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.758604 1111910 request.go:629] Waited for 195.285279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:41.758662 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:41.758667 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.758674 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.758680 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.761946 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.762411 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:41.762430 1111910 pod_ready.go:81] duration metric: took 399.194377ms for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.762442 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.958503 1111910 request.go:629] Waited for 195.955663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m03
	I0731 20:28:41.958579 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m03
	I0731 20:28:41.958593 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.958605 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.958610 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.961769 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.158872 1111910 request.go:629] Waited for 196.233217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.158983 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.158998 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.159009 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.159016 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.162046 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.162544 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.162565 1111910 pod_ready.go:81] duration metric: took 400.114992ms for pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.162576 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4mft2" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.358555 1111910 request.go:629] Waited for 195.898783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mft2
	I0731 20:28:42.358658 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mft2
	I0731 20:28:42.358666 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.358677 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.358687 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.361987 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.559084 1111910 request.go:629] Waited for 196.369786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.559156 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.559163 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.559177 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.559187 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.562430 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.563106 1111910 pod_ready.go:92] pod "kube-proxy-4mft2" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.563127 1111910 pod_ready.go:81] duration metric: took 400.544805ms for pod "kube-proxy-4mft2" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.563136 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.758761 1111910 request.go:629] Waited for 195.545293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:28:42.758848 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:28:42.758857 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.758865 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.758869 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.761796 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:42.958762 1111910 request.go:629] Waited for 196.362949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:42.958826 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:42.958833 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.958843 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.958849 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.961643 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:42.962375 1111910 pod_ready.go:92] pod "kube-proxy-hsd92" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.962393 1111910 pod_ready.go:81] duration metric: took 399.250667ms for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.962402 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.159491 1111910 request.go:629] Waited for 197.008184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:28:43.159552 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:28:43.159558 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.159570 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.159576 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.162318 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.358756 1111910 request.go:629] Waited for 195.744589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.358829 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.358836 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.358846 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.358864 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.361790 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.362386 1111910 pod_ready.go:92] pod "kube-proxy-m49fz" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:43.362405 1111910 pod_ready.go:81] duration metric: took 399.995104ms for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.362416 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.558456 1111910 request.go:629] Waited for 195.959944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:28:43.558535 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:28:43.558540 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.558548 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.558555 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.561185 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.759081 1111910 request.go:629] Waited for 197.361763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.759162 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.759170 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.759179 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.759187 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.762461 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:43.763045 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:43.763066 1111910 pod_ready.go:81] duration metric: took 400.638758ms for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.763075 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.959299 1111910 request.go:629] Waited for 196.129029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:28:43.959381 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:28:43.959392 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.959403 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.959416 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.962385 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.158486 1111910 request.go:629] Waited for 195.365632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:44.158681 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:44.158704 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.158713 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.158719 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.161674 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.162410 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:44.162427 1111910 pod_ready.go:81] duration metric: took 399.345789ms for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.162436 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.358504 1111910 request.go:629] Waited for 196.003111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m03
	I0731 20:28:44.358592 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m03
	I0731 20:28:44.358599 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.358607 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.358614 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.361421 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.559427 1111910 request.go:629] Waited for 197.353126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:44.559489 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:44.559494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.559501 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.559505 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.563330 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:44.563981 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:44.564001 1111910 pod_ready.go:81] duration metric: took 401.558982ms for pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.564013 1111910 pod_ready.go:38] duration metric: took 5.202000853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:28:44.564046 1111910 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:28:44.564138 1111910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:28:44.577774 1111910 api_server.go:72] duration metric: took 22.500125287s to wait for apiserver process to appear ...
	I0731 20:28:44.577801 1111910 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:28:44.577826 1111910 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0731 20:28:44.582020 1111910 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0731 20:28:44.582104 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0731 20:28:44.582114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.582122 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.582128 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.583002 1111910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 20:28:44.583078 1111910 api_server.go:141] control plane version: v1.30.3
	I0731 20:28:44.583095 1111910 api_server.go:131] duration metric: took 5.287222ms to wait for apiserver health ...
	I0731 20:28:44.583102 1111910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:28:44.758754 1111910 request.go:629] Waited for 175.571394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:44.758842 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:44.758850 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.758858 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.758864 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.765473 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:28:44.772327 1111910 system_pods.go:59] 24 kube-system pods found
	I0731 20:28:44.772366 1111910 system_pods.go:61] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:28:44.772372 1111910 system_pods.go:61] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:28:44.772377 1111910 system_pods.go:61] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:28:44.772382 1111910 system_pods.go:61] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:28:44.772389 1111910 system_pods.go:61] "etcd-ha-430887-m03" [6d37da19-a94f-4068-9dd2-580c67d223d5] Running
	I0731 20:28:44.772394 1111910 system_pods.go:61] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:28:44.772399 1111910 system_pods.go:61] "kindnet-fbt5h" [42db9e05-a780-4945-a413-98fa5832c8d7] Running
	I0731 20:28:44.772404 1111910 system_pods.go:61] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:28:44.772409 1111910 system_pods.go:61] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:28:44.772414 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:28:44.772420 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m03" [7f79c842-b83a-4eae-96c2-b6defb36ed65] Running
	I0731 20:28:44.772433 1111910 system_pods.go:61] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:28:44.772438 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:28:44.772447 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m03" [69f7ba2e-3b34-4797-b09e-05e82d37f656] Running
	I0731 20:28:44.772452 1111910 system_pods.go:61] "kube-proxy-4mft2" [71207460-fab2-4bf0-bfa6-180878539386] Running
	I0731 20:28:44.772455 1111910 system_pods.go:61] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:28:44.772459 1111910 system_pods.go:61] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:28:44.772463 1111910 system_pods.go:61] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:28:44.772467 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:28:44.772473 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m03" [082e5224-ffd5-4ecb-a103-7a1901f29709] Running
	I0731 20:28:44.772476 1111910 system_pods.go:61] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:28:44.772480 1111910 system_pods.go:61] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:28:44.772486 1111910 system_pods.go:61] "kube-vip-ha-430887-m03" [53aeb41f-2430-4e51-9563-1878009bad9b] Running
	I0731 20:28:44.772491 1111910 system_pods.go:61] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:28:44.772497 1111910 system_pods.go:74] duration metric: took 189.381772ms to wait for pod list to return data ...
	I0731 20:28:44.772507 1111910 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:28:44.958956 1111910 request.go:629] Waited for 186.368147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:28:44.959020 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:28:44.959026 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.959034 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.959038 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.961922 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.962056 1111910 default_sa.go:45] found service account: "default"
	I0731 20:28:44.962071 1111910 default_sa.go:55] duration metric: took 189.558174ms for default service account to be created ...
	I0731 20:28:44.962079 1111910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:28:45.158659 1111910 request.go:629] Waited for 196.4985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:45.158722 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:45.158730 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:45.158737 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:45.158741 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:45.164683 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:28:45.172082 1111910 system_pods.go:86] 24 kube-system pods found
	I0731 20:28:45.172126 1111910 system_pods.go:89] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:28:45.172135 1111910 system_pods.go:89] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:28:45.172141 1111910 system_pods.go:89] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:28:45.172148 1111910 system_pods.go:89] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:28:45.172154 1111910 system_pods.go:89] "etcd-ha-430887-m03" [6d37da19-a94f-4068-9dd2-580c67d223d5] Running
	I0731 20:28:45.172160 1111910 system_pods.go:89] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:28:45.172171 1111910 system_pods.go:89] "kindnet-fbt5h" [42db9e05-a780-4945-a413-98fa5832c8d7] Running
	I0731 20:28:45.172177 1111910 system_pods.go:89] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:28:45.172187 1111910 system_pods.go:89] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:28:45.172194 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:28:45.172202 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m03" [7f79c842-b83a-4eae-96c2-b6defb36ed65] Running
	I0731 20:28:45.172209 1111910 system_pods.go:89] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:28:45.172221 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:28:45.172226 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m03" [69f7ba2e-3b34-4797-b09e-05e82d37f656] Running
	I0731 20:28:45.172230 1111910 system_pods.go:89] "kube-proxy-4mft2" [71207460-fab2-4bf0-bfa6-180878539386] Running
	I0731 20:28:45.172234 1111910 system_pods.go:89] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:28:45.172238 1111910 system_pods.go:89] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:28:45.172245 1111910 system_pods.go:89] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:28:45.172251 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:28:45.172256 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m03" [082e5224-ffd5-4ecb-a103-7a1901f29709] Running
	I0731 20:28:45.172262 1111910 system_pods.go:89] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:28:45.172267 1111910 system_pods.go:89] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:28:45.172272 1111910 system_pods.go:89] "kube-vip-ha-430887-m03" [53aeb41f-2430-4e51-9563-1878009bad9b] Running
	I0731 20:28:45.172276 1111910 system_pods.go:89] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:28:45.172285 1111910 system_pods.go:126] duration metric: took 210.199281ms to wait for k8s-apps to be running ...
	I0731 20:28:45.172299 1111910 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:28:45.172357 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:28:45.186785 1111910 system_svc.go:56] duration metric: took 14.479473ms WaitForService to wait for kubelet
	I0731 20:28:45.186815 1111910 kubeadm.go:582] duration metric: took 23.109172519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:28:45.186854 1111910 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:28:45.359282 1111910 request.go:629] Waited for 172.337834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0731 20:28:45.359353 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0731 20:28:45.359366 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:45.359374 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:45.359383 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:45.362765 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:45.363961 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.363980 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.363997 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.364004 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.364009 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.364014 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.364019 1111910 node_conditions.go:105] duration metric: took 177.156109ms to run NodePressure ...
	I0731 20:28:45.364036 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:28:45.364061 1111910 start.go:255] writing updated cluster config ...
	I0731 20:28:45.364390 1111910 ssh_runner.go:195] Run: rm -f paused
	I0731 20:28:45.415388 1111910 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 20:28:45.417430 1111910 out.go:177] * Done! kubectl is now configured to use "ha-430887" cluster and "default" namespace by default
	
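Throughout the start log above, client-go repeatedly reports lines such as "Waited for 197.008184ms due to client-side throttling, not priority and fairness". Those delays come from client-go's client-side token-bucket rate limiter (driven by the client's QPS/Burst settings), not from the API server's Priority and Fairness feature. The following is a minimal, self-contained sketch of that mechanism only; the QPS and Burst values are illustrative assumptions, not minikube's actual configuration.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// Assumed illustrative values: 5 requests/sec with a burst of 10.
    	// client-go's defaults are in this range, but minikube may override them.
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		// Wait blocks until a token is available; this blocking time is what
    		// client-go surfaces as "Waited for ... due to client-side throttling".
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
    	}
    }

Once the initial burst of tokens is spent, each further request waits roughly 1/QPS seconds (about 200ms at QPS 5), which is consistent with the ~175-200ms waits recorded in the log above.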
	
	==> CRI-O <==
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.165578250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457940165556598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2f677dc-5e49-40c8-85d7-a0f5e1a8a46a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.166070984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25f74151-9624-4fca-8f3d-7e78a2ae331b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.166124181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25f74151-9624-4fca-8f3d-7e78a2ae331b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.166500260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25f74151-9624-4fca-8f3d-7e78a2ae331b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.200358345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed63dcd1-de5d-4db2-985f-cb88649aafb0 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.200440846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed63dcd1-de5d-4db2-985f-cb88649aafb0 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.201589083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67c025d7-5f64-46f6-b05f-509712c3bcd5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.202006095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457940201986938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67c025d7-5f64-46f6-b05f-509712c3bcd5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.202536862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2ef4421-b994-45cf-84a1-44215d1f3586 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.202601390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2ef4421-b994-45cf-84a1-44215d1f3586 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.202848041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2ef4421-b994-45cf-84a1-44215d1f3586 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.236256023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c726eb5-b1bd-4f13-a680-70b8fd3e96bc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.236338917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c726eb5-b1bd-4f13-a680-70b8fd3e96bc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.237201146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=046b26ef-4a2f-4f5d-9941-9fb13a04887e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.238016751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457940237991487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=046b26ef-4a2f-4f5d-9941-9fb13a04887e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.239504680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e77fdbd-3877-4323-a303-33ee2c6a5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.240422675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e77fdbd-3877-4323-a303-33ee2c6a5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.243199450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e77fdbd-3877-4323-a303-33ee2c6a5eb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.295059382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e99272e-b946-40f4-a72c-03ef4774b41d name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.295192644Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e99272e-b946-40f4-a72c-03ef4774b41d name=/runtime.v1.RuntimeService/Version
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.296348560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afab048f-ea04-4bdc-9a93-4e7220c81cbd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.296751881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457940296731805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afab048f-ea04-4bdc-9a93-4e7220c81cbd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.297424924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=934c9673-bbf2-4e9d-ba40-73f5195ece88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.297477479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=934c9673-bbf2-4e9d-ba40-73f5195ece88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:32:20 ha-430887 crio[682]: time="2024-07-31 20:32:20.297721577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=934c9673-bbf2-4e9d-ba40-73f5195ece88 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b61252be77d59       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   94749dc3b8a05       busybox-fc5497c4f-tkmzn
	6804a88577bb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   364daaeb39b2a       coredns-7db6d8ff4d-tkm49
	431be4d60e882       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   bf04533b742a0       storage-provisioner
	a3a604ebae38f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   c5096ff8ccf93       coredns-7db6d8ff4d-rhlnq
	63366667a98d5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   75a5e3ddf89ae       kindnet-xmjzn
	2c3cfe9da185a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   45f974d9fa89f       kube-proxy-m49fz
	87bc5b4c15b86       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   4e13ff1bf8383       kube-vip-ha-430887
	03b10e7eedd37       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   fad3c90ca7670       kube-apiserver-ha-430887
	019dbd42b381f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   e2bba8d22a3ce       kube-scheduler-ha-430887
	5d05fc1d45725       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   9da4629d918d3       etcd-ha-430887
	31bfc4408c834       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   a2c805cc2a87b       kube-controller-manager-ha-430887
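
The listing above is the crictl view of the containers running on the ha-430887 node. A minimal Go sketch that reproduces it by shelling out through `minikube ssh` (the profile name ha-430887 and a minikube binary on PATH are assumptions taken from this report, not a fixed part of the harness):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run crictl inside the ha-430887 VM via minikube ssh, mirroring the
	// "container status" table captured above.
	out, err := exec.Command("minikube", "-p", "ha-430887", "ssh", "--",
		"sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl listing failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
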
	
	
	==> coredns [6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02] <==
	[INFO] 10.244.1.2:40160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003254124s
	[INFO] 10.244.0.4:52726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077926s
	[INFO] 10.244.0.4:47159 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000495s
	[INFO] 10.244.1.2:58934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121874s
	[INFO] 10.244.1.2:43600 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179937s
	[INFO] 10.244.1.2:51933 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175849s
	[INFO] 10.244.1.2:36619 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118307s
	[INFO] 10.244.2.2:51012 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102784s
	[INFO] 10.244.2.2:46299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151507s
	[INFO] 10.244.2.2:32857 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075858s
	[INFO] 10.244.0.4:40942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087643s
	[INFO] 10.244.0.4:34086 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741525s
	[INFO] 10.244.0.4:52613 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051957s
	[INFO] 10.244.0.4:48069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001210819s
	[INFO] 10.244.1.2:57723 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084885s
	[INFO] 10.244.1.2:43800 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099387s
	[INFO] 10.244.2.2:48837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134956s
	[INFO] 10.244.2.2:46133 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008076s
	[INFO] 10.244.1.2:52179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123976s
	[INFO] 10.244.1.2:38064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121703s
	[INFO] 10.244.2.2:38356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183387s
	[INFO] 10.244.2.2:45481 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194275s
	[INFO] 10.244.2.2:42027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138509s
	[INFO] 10.244.2.2:47364 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140763s
	[INFO] 10.244.0.4:57224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075497s
	
	
	==> coredns [a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a] <==
	[INFO] 10.244.1.2:58003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160037s
	[INFO] 10.244.1.2:37096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00351051s
	[INFO] 10.244.1.2:39762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151696s
	[INFO] 10.244.2.2:49534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.2.2:60700 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001603178s
	[INFO] 10.244.2.2:47959 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001232076s
	[INFO] 10.244.2.2:48165 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186379s
	[INFO] 10.244.2.2:37258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102901s
	[INFO] 10.244.0.4:51406 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128128s
	[INFO] 10.244.0.4:52718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122292s
	[INFO] 10.244.0.4:35814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077793s
	[INFO] 10.244.0.4:57174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050499s
	[INFO] 10.244.1.2:35721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152974s
	[INFO] 10.244.1.2:52365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099511s
	[INFO] 10.244.2.2:56276 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095649s
	[INFO] 10.244.2.2:33350 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089031s
	[INFO] 10.244.0.4:39526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089609s
	[INFO] 10.244.0.4:32892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000036988s
	[INFO] 10.244.0.4:54821 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028078s
	[INFO] 10.244.0.4:40693 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000023261s
	[INFO] 10.244.1.2:56760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130165s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109643s
	[INFO] 10.244.0.4:55943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117823s
	[INFO] 10.244.0.4:40806 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010301s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076201s
	
	
	==> describe nodes <==
	Name:               ha-430887
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:25:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:32:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-430887
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d983ecff48054665b7d9523d0704c9fc
	  System UUID:                d983ecff-4805-4665-b7d9-523d0704c9fc
	  Boot ID:                    713545a1-3d19-4194-8d69-3cd83a4e4967
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tkmzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-7db6d8ff4d-rhlnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-7db6d8ff4d-tkm49             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-430887                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-xmjzn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-430887             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-430887    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-m49fz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-430887             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-430887                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s                  kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s                  kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s                  kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal  NodeReady                5m54s                  kubelet          Node ha-430887 status is now: NodeReady
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	
	
	Name:               ha-430887-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:27:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:29:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-430887-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec9db720f1af4a7b8ddebc5f57826488
	  System UUID:                ec9db720-f1af-4a7b-8dde-bc5f57826488
	  Boot ID:                    97b08b0d-d235-4e8a-b4a7-e20b5af5885a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hhwcx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-430887-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-49h86                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-430887-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-430887-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-hsd92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-430887-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-430887-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeReady                4m53s                  kubelet          Node ha-430887-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-430887-m02 status is now: NodeNotReady
	
	
	Name:               ha-430887-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_28_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:32:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-430887-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d94d6e3c9c5248219d2ba3137d0cbf54
	  System UUID:                d94d6e3c-9c52-4821-9d2b-a3137d0cbf54
	  Boot ID:                    12aeb95e-ca69-400d-a151-3febbd846662
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lt5n8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-430887-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m57s
	  kube-system                 kindnet-fbt5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-apiserver-ha-430887-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-430887-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-4mft2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-ha-430887-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-430887-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-430887-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal  RegisteredNode           3m44s                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	
	
	Name:               ha-430887-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_29_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:29:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:32:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-430887-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e62b3ad5cf6244ff98aa273667a5b995
	  System UUID:                e62b3ad5-cf62-44ff-98aa-273667a5b995
	  Boot ID:                    2766dd92-7fcf-4d2d-8743-67c3234050f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gg2tl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-8cqlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  RegisteredNode           2m59s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node ha-430887-m04 status is now: NodeReady
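
The node descriptions above show ha-430887-m02 with all conditions Unknown and a NodeNotReady event (its kubelet stopped posting status), while ha-430887, ha-430887-m03, and ha-430887-m04 remain Ready. A minimal Go sketch that surfaces the same information by shelling out to kubectl; it assumes the active kubeconfig context points at this ha-430887 cluster, which is an assumption about the local environment rather than something the report states:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// kubectl's STATUS column is derived from the Ready condition shown in the
	// node descriptions above; anything other than "Ready" is flagged.
	out, err := exec.Command("kubectl", "get", "nodes", "--no-headers").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[1] != "Ready" {
			fmt.Printf("node %s is not Ready (status: %s)\n", fields[0], fields[1])
		}
	}
}
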
	
	
	==> dmesg <==
	[Jul31 20:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047211] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.034798] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.637543] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.671838] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.539428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.396030] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053894] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164850] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.142838] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.248524] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.814747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.436744] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.058175] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.102873] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.077595] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 20:26] kauditd_printk_skb: 18 callbacks suppressed
	[ +24.630735] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 20:27] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338] <==
	{"level":"warn","ts":"2024-07-31T20:32:20.539913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.546631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.550286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.550514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.563854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.566767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.574382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.581833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.585331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.58794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.588393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.595187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.600566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.605623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.608017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.610713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.617994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.623518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.633405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.641225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.644547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.649997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.650119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.655835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:32:20.661752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:32:20 up 6 min,  0 users,  load average: 0.08, 0.20, 0.12
	Linux ha-430887 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d] <==
	I0731 20:31:46.559346       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:31:56.555447       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:31:56.555497       1 main.go:299] handling current node
	I0731 20:31:56.555516       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:31:56.555525       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:31:56.555665       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:31:56.555692       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:31:56.555781       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:31:56.555807       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:32:06.561880       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:32:06.561986       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:32:06.562222       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:32:06.562259       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:32:06.562330       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:32:06.562350       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:32:06.562422       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:32:06.562443       1 main.go:299] handling current node
	I0731 20:32:16.553445       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:32:16.553581       1 main.go:299] handling current node
	I0731 20:32:16.553618       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:32:16.553669       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:32:16.553861       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:32:16.553906       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:32:16.554008       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:32:16.554044       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a] <==
	I0731 20:25:55.463416       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 20:25:55.470202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0731 20:25:55.471194       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:25:55.476969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 20:25:56.052016       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:25:56.577915       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:25:56.588744       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 20:25:56.598282       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:26:11.513080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 20:26:11.663027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 20:28:49.961390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E0731 20:28:50.140377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58066: use of closed network connection
	E0731 20:28:50.315266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58082: use of closed network connection
	E0731 20:28:50.517398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58100: use of closed network connection
	E0731 20:28:50.694991       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33722: use of closed network connection
	E0731 20:28:50.864532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33742: use of closed network connection
	E0731 20:28:51.035811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33774: use of closed network connection
	E0731 20:28:51.206946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33806: use of closed network connection
	E0731 20:28:51.386884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33828: use of closed network connection
	E0731 20:28:51.668773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33848: use of closed network connection
	E0731 20:28:51.841520       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33864: use of closed network connection
	E0731 20:28:52.015308       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33880: use of closed network connection
	E0731 20:28:52.189900       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	E0731 20:28:52.403559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33900: use of closed network connection
	E0731 20:28:52.569367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33926: use of closed network connection
	
	
	==> kube-controller-manager [31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0] <==
	I0731 20:28:46.315843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.598752ms"
	I0731 20:28:46.444818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.853042ms"
	E0731 20:28:46.444932       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:28:46.525462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.395943ms"
	I0731 20:28:46.613567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.147896ms"
	I0731 20:28:46.720384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.761956ms"
	E0731 20:28:46.721036       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:28:46.721296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.401µs"
	I0731 20:28:46.726483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.388µs"
	I0731 20:28:47.005667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.946µs"
	I0731 20:28:49.170350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.876629ms"
	I0731 20:28:49.170534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.332µs"
	I0731 20:28:49.296229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.911µs"
	I0731 20:28:49.403746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.281742ms"
	I0731 20:28:49.406351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.926µs"
	I0731 20:28:49.483106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.8971ms"
	I0731 20:28:49.483295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.169µs"
	E0731 20:29:21.293594       1 certificate_controller.go:146] Sync csr-9vxhw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-9vxhw": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:29:21.552609       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-430887-m04\" does not exist"
	I0731 20:29:21.595879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-430887-m04" podCIDRs=["10.244.3.0/24"]
	I0731 20:29:25.817773       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-430887-m04"
	I0731 20:29:40.246506       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-430887-m04"
	I0731 20:30:30.854112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-430887-m04"
	I0731 20:30:30.994059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.970894ms"
	I0731 20:30:30.996192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.596µs"
	
	
	==> kube-proxy [2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115] <==
	I0731 20:26:12.695961       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:26:12.714715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	I0731 20:26:12.753496       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:26:12.753551       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:26:12.753569       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:26:12.756334       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:26:12.756594       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:26:12.756620       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:26:12.758303       1 config.go:192] "Starting service config controller"
	I0731 20:26:12.758567       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:26:12.758618       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:26:12.758634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:26:12.759554       1 config.go:319] "Starting node config controller"
	I0731 20:26:12.759581       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:26:12.858985       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:26:12.859016       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:26:12.859747       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756] <==
	E0731 20:28:46.289774       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b668a1b0-4434-4037-a0a1-0461e748521d(default/busybox-fc5497c4f-tkmzn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-tkmzn"
	E0731 20:28:46.289894       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tkmzn\": pod busybox-fc5497c4f-tkmzn is already assigned to node \"ha-430887\"" pod="default/busybox-fc5497c4f-tkmzn"
	I0731 20:28:46.290007       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tkmzn" node="ha-430887"
	E0731 20:28:46.289647       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lt5n8\": pod busybox-fc5497c4f-lt5n8 is already assigned to node \"ha-430887-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lt5n8" node="ha-430887-m03"
	E0731 20:28:46.290769       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4c829ff4-83b3-406d-8dbf-77dda232f563(default/busybox-fc5497c4f-lt5n8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lt5n8"
	E0731 20:28:46.290864       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lt5n8\": pod busybox-fc5497c4f-lt5n8 is already assigned to node \"ha-430887-m03\"" pod="default/busybox-fc5497c4f-lt5n8"
	I0731 20:28:46.290899       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lt5n8" node="ha-430887-m03"
	E0731 20:29:21.609655       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8cqlp\": pod kube-proxy-8cqlp is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8cqlp" node="ha-430887-m04"
	E0731 20:29:21.611396       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8cqlp\": pod kube-proxy-8cqlp is already assigned to node \"ha-430887-m04\"" pod="kube-system/kube-proxy-8cqlp"
	E0731 20:29:21.612033       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gh25l\": pod kindnet-gh25l is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gh25l" node="ha-430887-m04"
	E0731 20:29:21.612122       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3f5b9250-8827-4c9b-a14d-dc47fd5cb3bc(kube-system/kindnet-gh25l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gh25l"
	E0731 20:29:21.612289       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gh25l\": pod kindnet-gh25l is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-gh25l"
	I0731 20:29:21.612350       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gh25l" node="ha-430887-m04"
	E0731 20:29:21.645766       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl7g6\": pod kube-proxy-fl7g6 is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl7g6" node="ha-430887-m04"
	E0731 20:29:21.647529       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 53424cfa-1677-492b-aa43-9b9ab353b4de(kube-system/kube-proxy-fl7g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl7g6"
	E0731 20:29:21.647684       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl7g6\": pod kube-proxy-fl7g6 is already assigned to node \"ha-430887-m04\"" pod="kube-system/kube-proxy-fl7g6"
	I0731 20:29:21.647860       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl7g6" node="ha-430887-m04"
	E0731 20:29:21.651039       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gg2tl\": pod kindnet-gg2tl is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gg2tl" node="ha-430887-m04"
	E0731 20:29:21.651183       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6681caa0-2da7-43db-a4ec-2270d5130ba8(kube-system/kindnet-gg2tl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gg2tl"
	E0731 20:29:21.651253       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gg2tl\": pod kindnet-gg2tl is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-gg2tl"
	I0731 20:29:21.651291       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gg2tl" node="ha-430887-m04"
	E0731 20:29:21.743717       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c2tw8\": pod kindnet-c2tw8 is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-c2tw8" node="ha-430887-m04"
	E0731 20:29:21.745715       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2315da31-4f29-4139-ac00-c3cc1bcd457d(kube-system/kindnet-c2tw8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c2tw8"
	E0731 20:29:21.745783       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c2tw8\": pod kindnet-c2tw8 is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-c2tw8"
	I0731 20:29:21.745826       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c2tw8" node="ha-430887-m04"
	
	
	==> kubelet <==
	Jul 31 20:27:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:27:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:28:46 ha-430887 kubelet[1378]: I0731 20:28:46.274596    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=154.274571787 podStartE2EDuration="2m34.274571787s" podCreationTimestamp="2024-07-31 20:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 20:26:28.70152693 +0000 UTC m=+32.367819517" watchObservedRunningTime="2024-07-31 20:28:46.274571787 +0000 UTC m=+169.940864376"
	Jul 31 20:28:46 ha-430887 kubelet[1378]: I0731 20:28:46.274848    1378 topology_manager.go:215] "Topology Admit Handler" podUID="b668a1b0-4434-4037-a0a1-0461e748521d" podNamespace="default" podName="busybox-fc5497c4f-tkmzn"
	Jul 31 20:28:46 ha-430887 kubelet[1378]: I0731 20:28:46.410876    1378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s5s\" (UniqueName: \"kubernetes.io/projected/b668a1b0-4434-4037-a0a1-0461e748521d-kube-api-access-z5s5s\") pod \"busybox-fc5497c4f-tkmzn\" (UID: \"b668a1b0-4434-4037-a0a1-0461e748521d\") " pod="default/busybox-fc5497c4f-tkmzn"
	Jul 31 20:28:56 ha-430887 kubelet[1378]: E0731 20:28:56.468622    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:28:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:29:56 ha-430887 kubelet[1378]: E0731 20:29:56.467064    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:29:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:30:56 ha-430887 kubelet[1378]: E0731 20:30:56.467405    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:30:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:31:56 ha-430887 kubelet[1378]: E0731 20:31:56.466743    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:31:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430887 -n ha-430887
helpers_test.go:261: (dbg) Run:  kubectl --context ha-430887 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.72s)
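
The repeated etcd warnings in the post-mortem log above ("dropped internal Raft message since sending buffer is full (overloaded network)") indicate that the outbound queue for peer c1466f1ea1ac417e filled up once the stopped secondary node ceased draining it, so heartbeats were dropped rather than queued indefinitely. As a minimal, illustrative Go sketch of that general pattern only (a bounded queue with a non-blocking send; the type and function names are invented for illustration and this is not etcd's implementation):

```go
package main

import "fmt"

// peerQueue is a hypothetical bounded send buffer for one remote peer.
type peerQueue struct {
	buf chan string // outbound messages awaiting transmission
}

// trySend enqueues a message without blocking; when the buffer is full
// (for example, because the peer is stopped and nothing drains the queue),
// the message is dropped and the caller can log a warning instead.
func (q *peerQueue) trySend(msg string) bool {
	select {
	case q.buf <- msg:
		return true
	default:
		return false // buffer full: drop the message
	}
}

func main() {
	q := &peerQueue{buf: make(chan string, 2)} // tiny buffer for demonstration
	for i := 0; i < 5; i++ {
		if !q.trySend("MsgHeartbeat") {
			fmt.Println("dropped MsgHeartbeat: sending buffer is full")
		}
	}
}
```

Once the buffer's two slots are taken, every further heartbeat is dropped immediately, which is why the warning repeats many times per second in the log while the peer remains unreachable.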

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (52.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (3.18701186s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:32:25.196558 1116754 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:25.196686 1116754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:25.196695 1116754 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:25.196699 1116754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:25.196886 1116754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:25.197097 1116754 out.go:298] Setting JSON to false
	I0731 20:32:25.197130 1116754 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:25.197177 1116754 notify.go:220] Checking for updates...
	I0731 20:32:25.197487 1116754 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:25.197501 1116754 status.go:255] checking status of ha-430887 ...
	I0731 20:32:25.197848 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.197917 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.213686 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0731 20:32:25.214200 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.214830 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.214860 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.215213 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.215397 1116754 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:25.216988 1116754 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:25.217017 1116754 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:25.217304 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.217341 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.233782 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0731 20:32:25.234287 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.234827 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.234865 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.235359 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.235724 1116754 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:25.239337 1116754 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:25.239818 1116754 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:25.239858 1116754 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:25.240168 1116754 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:25.240617 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.240667 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.259590 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0731 20:32:25.260066 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.260682 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.260712 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.261043 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.261277 1116754 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:25.261525 1116754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:25.261559 1116754 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:25.266369 1116754 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:25.267019 1116754 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:25.267049 1116754 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:25.267264 1116754 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:25.267509 1116754 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:25.267666 1116754 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:25.267829 1116754 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:25.359464 1116754 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:25.366110 1116754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:25.383142 1116754 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:25.383177 1116754 api_server.go:166] Checking apiserver status ...
	I0731 20:32:25.383217 1116754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:25.398649 1116754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:25.409830 1116754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:25.409895 1116754 ssh_runner.go:195] Run: ls
	I0731 20:32:25.414272 1116754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:25.418735 1116754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:25.418763 1116754 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:25.418774 1116754 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:25.418816 1116754 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:25.419097 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.419125 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.434182 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33849
	I0731 20:32:25.434578 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.435076 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.435104 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.435478 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.435701 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:25.437324 1116754 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:25.437344 1116754 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:25.437688 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.437718 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.452141 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0731 20:32:25.452591 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.453041 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.453059 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.453379 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.453590 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:25.456464 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:25.456983 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:25.457010 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:25.457141 1116754 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:25.457427 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:25.457465 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:25.471832 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45017
	I0731 20:32:25.472391 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:25.472945 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:25.472967 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:25.473302 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:25.473496 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:25.473688 1116754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:25.473710 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:25.476397 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:25.476839 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:25.476864 1116754 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:25.477004 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:25.477201 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:25.477363 1116754 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:25.477513 1116754 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:28.000409 1116754 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:28.000522 1116754 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:28.000540 1116754 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:28.000550 1116754 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:28.000573 1116754 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:28.000591 1116754 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:28.000906 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.000977 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.016283 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0731 20:32:28.016741 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.017244 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.017269 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.017563 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.017766 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:28.019161 1116754 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:28.019182 1116754 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:28.019496 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.019527 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.036142 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I0731 20:32:28.036570 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.037069 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.037094 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.037444 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.037637 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:28.040379 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:28.040825 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:28.040853 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:28.040992 1116754 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:28.041354 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.041394 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.056272 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0731 20:32:28.056758 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.057201 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.057221 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.057475 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.057702 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:28.057913 1116754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:28.057938 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:28.060818 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:28.061277 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:28.061297 1116754 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:28.061426 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:28.061607 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:28.061762 1116754 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:28.061888 1116754 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:28.139796 1116754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:28.152959 1116754 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:28.152994 1116754 api_server.go:166] Checking apiserver status ...
	I0731 20:32:28.153032 1116754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:28.165367 1116754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:28.173762 1116754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:28.173812 1116754 ssh_runner.go:195] Run: ls
	I0731 20:32:28.177768 1116754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:28.182085 1116754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:28.182114 1116754 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:28.182124 1116754 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:28.182142 1116754 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:28.182704 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.182743 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.198908 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0731 20:32:28.199326 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.199878 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.199901 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.200287 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.200488 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:28.202427 1116754 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:28.202447 1116754 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:28.202874 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.202905 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.218622 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0731 20:32:28.219111 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.219619 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.219639 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.219971 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.220223 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:28.222948 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:28.223415 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:28.223456 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:28.223530 1116754 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:28.223871 1116754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:28.223912 1116754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:28.239631 1116754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0731 20:32:28.240123 1116754 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:28.240640 1116754 main.go:141] libmachine: Using API Version  1
	I0731 20:32:28.240664 1116754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:28.241017 1116754 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:28.241225 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:28.241452 1116754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:28.241475 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:28.244507 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:28.245062 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:28.245085 1116754 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:28.245293 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:28.245474 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:28.245642 1116754 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:28.245820 1116754 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:28.323584 1116754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:28.336801 1116754 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
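
The stderr trace above shows the per-node sequence the status command walks through: look up the host via the kvm2 driver plugin, SSH in to check /var usage, run `sudo systemctl is-active --quiet service kubelet`, then probe the control-plane endpoint at https://192.168.39.254:8443/healthz; for ha-430887-m02 the SSH dial fails with "no route to host", yielding the Error/Nonexistent status. As a rough, self-contained sketch of just the final healthz probe, with assumed parameters (the hard-coded URL, timeout, and skipping of TLS verification are illustrative only; the real client is configured with the cluster's CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs the same kind of check the trace logs as
// "Checking apiserver healthz at https://...:8443/healthz".
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthy: /healthz returned 200")
}
```

A 200 response maps to "apiserver: Running" in the status output; a connection error or non-200 response would surface as the Nonexistent/Error states seen for the stopped node.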
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (2.442033092s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:32:29.008819 1116855 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:29.008930 1116855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:29.008941 1116855 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:29.008945 1116855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:29.009144 1116855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:29.009305 1116855 out.go:298] Setting JSON to false
	I0731 20:32:29.009328 1116855 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:29.009436 1116855 notify.go:220] Checking for updates...
	I0731 20:32:29.009775 1116855 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:29.009796 1116855 status.go:255] checking status of ha-430887 ...
	I0731 20:32:29.010140 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.010194 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.030613 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0731 20:32:29.031041 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.031597 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.031626 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.031998 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.032268 1116855 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:29.033920 1116855 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:29.033943 1116855 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:29.034234 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.034280 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.049668 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0731 20:32:29.050206 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.050698 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.050718 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.051169 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.051349 1116855 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:29.054316 1116855 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:29.054738 1116855 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:29.054765 1116855 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:29.054947 1116855 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:29.055210 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.055255 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.071985 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I0731 20:32:29.072419 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.072838 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.072858 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.073185 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.073400 1116855 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:29.073602 1116855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:29.073627 1116855 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:29.076517 1116855 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:29.076979 1116855 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:29.077006 1116855 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:29.077136 1116855 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:29.077294 1116855 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:29.077452 1116855 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:29.077643 1116855 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:29.159341 1116855 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:29.165541 1116855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:29.179190 1116855 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:29.179215 1116855 api_server.go:166] Checking apiserver status ...
	I0731 20:32:29.179252 1116855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:29.193377 1116855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:29.202111 1116855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:29.202166 1116855 ssh_runner.go:195] Run: ls
	I0731 20:32:29.205998 1116855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:29.212015 1116855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:29.212043 1116855 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:29.212055 1116855 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:29.212082 1116855 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:29.212413 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.212440 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.228002 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0731 20:32:29.228421 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.229001 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.229028 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.229398 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.229607 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:29.231019 1116855 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:29.231038 1116855 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:29.231380 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.231410 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.245944 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I0731 20:32:29.246371 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.246883 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.246905 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.247236 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.247465 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:29.250417 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:29.250832 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:29.250860 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:29.250988 1116855 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:29.251322 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:29.251364 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:29.266930 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0731 20:32:29.267331 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:29.267764 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:29.267788 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:29.268116 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:29.268318 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:29.268499 1116855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:29.268521 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:29.270876 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:29.271300 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:29.271322 1116855 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:29.271503 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:29.271662 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:29.271820 1116855 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:29.271983 1116855 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:31.072443 1116855 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:31.072552 1116855 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:31.072570 1116855 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:31.072579 1116855 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:31.072598 1116855 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:31.072620 1116855 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:31.072936 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.072988 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.089327 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36819
	I0731 20:32:31.089838 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.090422 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.090456 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.090774 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.090983 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:31.092588 1116855 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:31.092614 1116855 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:31.092909 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.092946 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.108879 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
	I0731 20:32:31.109312 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.109747 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.109772 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.110102 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.110283 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:31.113115 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:31.113482 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:31.113496 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:31.113708 1116855 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:31.114046 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.114083 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.129857 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I0731 20:32:31.130290 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.130774 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.130794 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.131111 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.131338 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:31.131535 1116855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:31.131562 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:31.134301 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:31.134770 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:31.134807 1116855 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:31.134896 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:31.135063 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:31.135215 1116855 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:31.135356 1116855 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:31.210878 1116855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:31.224332 1116855 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:31.224363 1116855 api_server.go:166] Checking apiserver status ...
	I0731 20:32:31.224397 1116855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:31.236844 1116855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:31.245463 1116855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:31.245535 1116855 ssh_runner.go:195] Run: ls
	I0731 20:32:31.249605 1116855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:31.253672 1116855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:31.253714 1116855 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:31.253730 1116855 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:31.253747 1116855 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:31.254126 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.254167 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.269592 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I0731 20:32:31.270011 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.270569 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.270590 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.270931 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.271142 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:31.273016 1116855 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:31.273034 1116855 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:31.273322 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.273367 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.289248 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0731 20:32:31.289747 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.290241 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.290264 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.290580 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.290779 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:31.293098 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:31.293609 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:31.293633 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:31.293812 1116855 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:31.294104 1116855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:31.294126 1116855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:31.308845 1116855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0731 20:32:31.309280 1116855 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:31.309764 1116855 main.go:141] libmachine: Using API Version  1
	I0731 20:32:31.309787 1116855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:31.310099 1116855 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:31.310283 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:31.310470 1116855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:31.310489 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:31.313204 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:31.313615 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:31.313666 1116855 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:31.313805 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:31.313996 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:31.314175 1116855 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:31.314336 1116855 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:31.390694 1116855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:31.403733 1116855 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (5.250348067s)

-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 20:32:32.349425 1116940 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:32.349577 1116940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:32.349589 1116940 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:32.349595 1116940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:32.349817 1116940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:32.350025 1116940 out.go:298] Setting JSON to false
	I0731 20:32:32.350065 1116940 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:32.350180 1116940 notify.go:220] Checking for updates...
	I0731 20:32:32.350506 1116940 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:32.350525 1116940 status.go:255] checking status of ha-430887 ...
	I0731 20:32:32.351053 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.351125 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.366779 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40843
	I0731 20:32:32.367321 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.367929 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.367953 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.368293 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.368453 1116940 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:32.370016 1116940 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:32.370042 1116940 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:32.370320 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.370352 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.386268 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0731 20:32:32.386689 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.387162 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.387197 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.387518 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.387728 1116940 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:32.390703 1116940 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:32.391101 1116940 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:32.391128 1116940 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:32.391244 1116940 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:32.391540 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.391577 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.408039 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44821
	I0731 20:32:32.408466 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.409138 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.409165 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.409508 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.409714 1116940 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:32.409898 1116940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:32.409924 1116940 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:32.413242 1116940 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:32.413664 1116940 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:32.413691 1116940 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:32.413941 1116940 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:32.414116 1116940 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:32.414267 1116940 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:32.414423 1116940 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:32.496180 1116940 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:32.501869 1116940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:32.517088 1116940 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:32.517117 1116940 api_server.go:166] Checking apiserver status ...
	I0731 20:32:32.517148 1116940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:32.529961 1116940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:32.538734 1116940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:32.538789 1116940 ssh_runner.go:195] Run: ls
	I0731 20:32:32.543249 1116940 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:32.547142 1116940 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:32.547164 1116940 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:32.547186 1116940 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:32.547214 1116940 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:32.547619 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.547652 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.563461 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37723
	I0731 20:32:32.563921 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.564486 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.564537 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.564951 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.565182 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:32.566786 1116940 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:32.566805 1116940 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:32.567131 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.567154 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.583666 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0731 20:32:32.584085 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.584600 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.584620 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.584931 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.585145 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:32.587870 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:32.588320 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:32.588361 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:32.588440 1116940 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:32.588754 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:32.588779 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:32.604063 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I0731 20:32:32.604472 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:32.604959 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:32.604978 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:32.605355 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:32.605606 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:32.605796 1116940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:32.605816 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:32.608266 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:32.608727 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:32.608765 1116940 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:32.608951 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:32.609147 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:32.609288 1116940 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:32.609409 1116940 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:34.144456 1116940 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:34.144546 1116940 retry.go:31] will retry after 268.486925ms: dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:37.216441 1116940 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:37.216541 1116940 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:37.216558 1116940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:37.216591 1116940 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:37.216626 1116940 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:37.216634 1116940 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:37.217095 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.217155 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.233191 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0731 20:32:37.233662 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.234199 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.234220 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.234573 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.234773 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:37.236514 1116940 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:37.236539 1116940 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:37.236918 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.236964 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.251874 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0731 20:32:37.252280 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.252722 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.252742 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.253056 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.253271 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:37.256143 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:37.256533 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:37.256564 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:37.256663 1116940 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:37.256981 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.257033 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.271419 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0731 20:32:37.271800 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.272230 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.272252 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.272571 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.272751 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:37.272961 1116940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:37.272981 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:37.275447 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:37.275883 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:37.275913 1116940 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:37.276029 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:37.276221 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:37.276419 1116940 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:37.276570 1116940 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:37.355867 1116940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:37.370444 1116940 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:37.370473 1116940 api_server.go:166] Checking apiserver status ...
	I0731 20:32:37.370506 1116940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:37.382296 1116940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:37.390944 1116940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:37.391003 1116940 ssh_runner.go:195] Run: ls
	I0731 20:32:37.394789 1116940 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:37.398909 1116940 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:37.398928 1116940 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:37.398936 1116940 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:37.398951 1116940 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:37.399251 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.399278 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.415183 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0731 20:32:37.415668 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.416157 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.416183 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.416564 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.416793 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:37.418567 1116940 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:37.418584 1116940 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:37.418938 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.418976 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.433920 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0731 20:32:37.434390 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.434875 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.434894 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.435177 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.435359 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:37.438017 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:37.438515 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:37.438540 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:37.438717 1116940 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:37.439083 1116940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:37.439124 1116940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:37.454104 1116940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0731 20:32:37.454637 1116940 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:37.455159 1116940 main.go:141] libmachine: Using API Version  1
	I0731 20:32:37.455177 1116940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:37.455466 1116940 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:37.455740 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:37.455949 1116940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:37.455974 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:37.458841 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:37.459275 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:37.459305 1116940 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:37.459427 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:37.459607 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:37.459773 1116940 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:37.459913 1116940 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:37.538568 1116940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:37.551676 1116940 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (3.720518291s)

-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 20:32:40.167724 1117058 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:40.167870 1117058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:40.167883 1117058 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:40.167889 1117058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:40.168050 1117058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:40.168241 1117058 out.go:298] Setting JSON to false
	I0731 20:32:40.168269 1117058 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:40.168310 1117058 notify.go:220] Checking for updates...
	I0731 20:32:40.168782 1117058 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:40.168806 1117058 status.go:255] checking status of ha-430887 ...
	I0731 20:32:40.169311 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.169384 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.189515 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0731 20:32:40.189932 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.190692 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.190719 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.191078 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.191314 1117058 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:40.193076 1117058 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:40.193106 1117058 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:40.193378 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.193412 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.209291 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0731 20:32:40.209738 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.210284 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.210309 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.210630 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.210861 1117058 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:40.213865 1117058 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:40.214350 1117058 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:40.214372 1117058 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:40.214506 1117058 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:40.214798 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.214843 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.230980 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0731 20:32:40.231389 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.231843 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.231867 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.232217 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.232379 1117058 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:40.232570 1117058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:40.232608 1117058 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:40.235569 1117058 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:40.236056 1117058 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:40.236086 1117058 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:40.236266 1117058 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:40.236440 1117058 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:40.236597 1117058 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:40.236731 1117058 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:40.319463 1117058 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:40.325396 1117058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:40.339018 1117058 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:40.339057 1117058 api_server.go:166] Checking apiserver status ...
	I0731 20:32:40.339099 1117058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:40.351210 1117058 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:40.359262 1117058 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:40.359319 1117058 ssh_runner.go:195] Run: ls
	I0731 20:32:40.363323 1117058 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:40.367727 1117058 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:40.367746 1117058 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:40.367757 1117058 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:40.367774 1117058 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:40.368104 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.368150 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.383997 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0731 20:32:40.384479 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.385046 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.385069 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.385418 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.385634 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:40.387282 1117058 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:40.387304 1117058 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:40.387639 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.387668 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.402751 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I0731 20:32:40.403171 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.403661 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.403684 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.404018 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.404256 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:40.406777 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:40.407202 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:40.407226 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:40.407339 1117058 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:40.407671 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:40.407735 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:40.422265 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0731 20:32:40.422618 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:40.423088 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:40.423109 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:40.423394 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:40.423590 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:40.423769 1117058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:40.423789 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:40.426920 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:40.427392 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:40.427418 1117058 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:40.427565 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:40.427739 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:40.427902 1117058 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:40.428013 1117058 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:43.488361 1117058 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:43.488496 1117058 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:43.488519 1117058 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:43.488533 1117058 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:43.488552 1117058 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:43.488564 1117058 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:43.488907 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.488980 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.504274 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0731 20:32:43.504757 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.505286 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.505318 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.505672 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.505873 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:43.507828 1117058 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:43.507851 1117058 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:43.508184 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.508234 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.524061 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42231
	I0731 20:32:43.524595 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.525146 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.525168 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.525482 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.525695 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:43.528426 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:43.528916 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:43.528947 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:43.529107 1117058 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:43.529409 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.529439 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.545754 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I0731 20:32:43.546136 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.546576 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.546603 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.546929 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.547080 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:43.547299 1117058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:43.547325 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:43.549947 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:43.550354 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:43.550380 1117058 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:43.550529 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:43.550714 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:43.550866 1117058 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:43.551013 1117058 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:43.635735 1117058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:43.652200 1117058 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:43.652247 1117058 api_server.go:166] Checking apiserver status ...
	I0731 20:32:43.652291 1117058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:43.667144 1117058 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:43.678177 1117058 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:43.678236 1117058 ssh_runner.go:195] Run: ls
	I0731 20:32:43.683096 1117058 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:43.687780 1117058 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:43.687806 1117058 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:43.687821 1117058 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:43.687841 1117058 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:43.688211 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.688242 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.703374 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0731 20:32:43.703871 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.704362 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.704382 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.704690 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.704849 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:43.706111 1117058 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:43.706126 1117058 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:43.706418 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.706452 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.721903 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0731 20:32:43.722303 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.722756 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.722780 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.723145 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.723336 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:43.726144 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:43.726653 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:43.726684 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:43.726876 1117058 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:43.727186 1117058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:43.727224 1117058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:43.742789 1117058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I0731 20:32:43.743202 1117058 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:43.743665 1117058 main.go:141] libmachine: Using API Version  1
	I0731 20:32:43.743683 1117058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:43.744036 1117058 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:43.744278 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:43.744470 1117058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:43.744503 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:43.747153 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:43.747628 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:43.747654 1117058 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:43.747749 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:43.747934 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:43.748066 1117058 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:43.748280 1117058 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:43.827441 1117058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:43.841213 1117058 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
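The status run captured above repeats the same probe for every node: launch the kvm2 driver plugin, dial SSH to the guest, run sh -c "df -h /var | awk 'NR==2{print $5}'" to read disk usage, and, for control-plane nodes, query the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. Because the SSH dial to ha-430887-m02 (192.168.39.149:22) fails with "no route to host", that node is reported as Host:Error / Kubelet:Nonexistent while the remaining nodes stay Running, which is what produces exit status 3. The following is a minimal Go sketch of such a healthz probe; the hard-coded endpoint and the InsecureSkipVerify transport are simplifying assumptions for illustration only, not minikube's actual client configuration (which authenticates with the cluster's client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz performs a GET against the apiserver /healthz endpoint and
	// reports whether it answered 200 "ok", mirroring the "Checking apiserver
	// healthz" step in the log. Certificate verification is skipped here purely
	// to keep the sketch self-contained.
	func probeHealthz(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err // e.g. "no route to host" while the VM is unreachable
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := probeHealthz("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}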
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (4.268406813s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:32:45.951085 1117158 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:45.951333 1117158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:45.951342 1117158 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:45.951346 1117158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:45.951538 1117158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:45.951699 1117158 out.go:298] Setting JSON to false
	I0731 20:32:45.951720 1117158 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:45.951839 1117158 notify.go:220] Checking for updates...
	I0731 20:32:45.952142 1117158 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:45.952158 1117158 status.go:255] checking status of ha-430887 ...
	I0731 20:32:45.952550 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:45.952610 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:45.973538 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0731 20:32:45.974146 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:45.974787 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:45.974810 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:45.975258 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:45.975468 1117158 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:45.977282 1117158 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:45.977303 1117158 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:45.977600 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:45.977645 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:45.995928 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0731 20:32:45.996427 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:45.997008 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:45.997036 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:45.997536 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:45.997753 1117158 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:46.000825 1117158 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:46.001279 1117158 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:46.001317 1117158 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:46.001436 1117158 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:46.001747 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:46.001794 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:46.018510 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0731 20:32:46.018936 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:46.019456 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:46.019483 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:46.019862 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:46.020128 1117158 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:46.020334 1117158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:46.020369 1117158 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:46.023103 1117158 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:46.023530 1117158 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:46.023549 1117158 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:46.023686 1117158 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:46.023860 1117158 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:46.024019 1117158 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:46.024174 1117158 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:46.102991 1117158 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:46.108939 1117158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:46.121911 1117158 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:46.121942 1117158 api_server.go:166] Checking apiserver status ...
	I0731 20:32:46.121975 1117158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:46.134873 1117158 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:46.143406 1117158 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:46.143465 1117158 ssh_runner.go:195] Run: ls
	I0731 20:32:46.147304 1117158 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:46.152859 1117158 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:46.152895 1117158 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:46.152910 1117158 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:46.152938 1117158 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:46.153266 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:46.153290 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:46.168449 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I0731 20:32:46.168893 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:46.169310 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:46.169324 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:46.169679 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:46.169857 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:46.171421 1117158 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:46.171446 1117158 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:46.171831 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:46.171886 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:46.186560 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0731 20:32:46.187012 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:46.187484 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:46.187507 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:46.187833 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:46.188010 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:46.190757 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:46.191213 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:46.191253 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:46.191381 1117158 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:46.191678 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:46.191713 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:46.206558 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0731 20:32:46.207029 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:46.207555 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:46.207578 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:46.207930 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:46.208166 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:46.208352 1117158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:46.208373 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:46.210923 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:46.211307 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:46.211340 1117158 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:46.211473 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:46.211649 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:46.211804 1117158 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:46.211928 1117158 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:46.560317 1117158 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:46.560370 1117158 retry.go:31] will retry after 189.56154ms: dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:49.824373 1117158 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:49.824463 1117158 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:49.824514 1117158 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:49.824525 1117158 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:49.824544 1117158 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:49.824551 1117158 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:49.824930 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:49.824979 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:49.840593 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0731 20:32:49.841154 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:49.841654 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:49.841678 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:49.841993 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:49.842204 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:49.843882 1117158 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:49.843903 1117158 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:49.844251 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:49.844299 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:49.858840 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0731 20:32:49.859286 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:49.859911 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:49.859935 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:49.860293 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:49.860473 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:49.863207 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:49.863593 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:49.863614 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:49.863773 1117158 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:49.864222 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:49.864268 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:49.878647 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0731 20:32:49.879127 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:49.879691 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:49.879719 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:49.880083 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:49.880291 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:49.880495 1117158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:49.880521 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:49.883188 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:49.883620 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:49.883658 1117158 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:49.883852 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:49.884031 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:49.884181 1117158 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:49.884340 1117158 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:49.967240 1117158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:49.984440 1117158 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:49.984479 1117158 api_server.go:166] Checking apiserver status ...
	I0731 20:32:49.984537 1117158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:50.003716 1117158 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:50.013015 1117158 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:50.013069 1117158 ssh_runner.go:195] Run: ls
	I0731 20:32:50.017256 1117158 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:50.021664 1117158 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:50.021690 1117158 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:50.021702 1117158 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:50.021739 1117158 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:50.022037 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:50.022069 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:50.039739 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0731 20:32:50.040226 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:50.040763 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:50.040795 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:50.041081 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:50.041296 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:50.042889 1117158 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:50.042909 1117158 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:50.043261 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:50.043296 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:50.057781 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0731 20:32:50.058143 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:50.058589 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:50.058605 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:50.058906 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:50.059076 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:50.061848 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:50.062279 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:50.062306 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:50.062443 1117158 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:50.062733 1117158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:50.062767 1117158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:50.077714 1117158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0731 20:32:50.078128 1117158 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:50.078611 1117158 main.go:141] libmachine: Using API Version  1
	I0731 20:32:50.078632 1117158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:50.078963 1117158 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:50.079166 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:50.079341 1117158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:50.079365 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:50.082044 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:50.082465 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:50.082489 1117158 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:50.082700 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:50.082888 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:50.083057 1117158 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:50.083220 1117158 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:50.158857 1117158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:50.172817 1117158 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
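A recurring but harmless warning in these runs is "unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/NNNN/cgroup: Process exited with status 1". This most likely reflects a cgroup v2 (unified hierarchy) guest, where /proc/<pid>/cgroup contains only a single "0::<path>" line, so the per-controller "N:freezer:" entry the probe greps for never matches; the status check then continues with the subsequent ls and healthz steps, which succeed (healthz returns 200 "ok"), so this warning is not what drives the exit status 3. Below is an illustrative Go reimplementation of what that probe looks for, assuming standard Linux procfs semantics; it is not minikube's own code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerCgroup scans /proc/<pid>/cgroup for a cgroup v1 "freezer" controller
	// entry. On a cgroup v2 host the file typically holds a single "0::<path>"
	// line, so ok comes back false, which corresponds to the warning in the log.
	func freezerCgroup(pid int) (path string, ok bool, err error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// v1 lines look like "4:freezer:/kubepods/...".
			parts := strings.SplitN(sc.Text(), ":", 3)
			if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
				return parts[2], true, nil
			}
		}
		return "", false, sc.Err()
	}

	func main() {
		path, ok, err := freezerCgroup(os.Getpid())
		fmt.Println(path, ok, err)
	}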
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (3.719529831s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:32:55.419454 1117274 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:32:55.419709 1117274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:55.419718 1117274 out.go:304] Setting ErrFile to fd 2...
	I0731 20:32:55.419722 1117274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:32:55.419923 1117274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:32:55.420081 1117274 out.go:298] Setting JSON to false
	I0731 20:32:55.420129 1117274 mustload.go:65] Loading cluster: ha-430887
	I0731 20:32:55.420173 1117274 notify.go:220] Checking for updates...
	I0731 20:32:55.420547 1117274 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:32:55.420571 1117274 status.go:255] checking status of ha-430887 ...
	I0731 20:32:55.421037 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.421109 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.441761 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0731 20:32:55.442330 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.442950 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.442975 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.443376 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.443573 1117274 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:32:55.445448 1117274 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:32:55.445471 1117274 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:55.445789 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.445823 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.460646 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0731 20:32:55.461032 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.461456 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.461475 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.461862 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.462077 1117274 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:32:55.464827 1117274 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:55.465267 1117274 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:55.465299 1117274 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:55.465439 1117274 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:32:55.465738 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.465781 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.481544 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0731 20:32:55.482079 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.482491 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.482519 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.482938 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.483138 1117274 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:32:55.483383 1117274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:55.483422 1117274 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:32:55.486190 1117274 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:55.486635 1117274 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:32:55.486664 1117274 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:32:55.486793 1117274 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:32:55.486959 1117274 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:32:55.487126 1117274 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:32:55.487282 1117274 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:32:55.575878 1117274 ssh_runner.go:195] Run: systemctl --version
	I0731 20:32:55.582571 1117274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:55.595908 1117274 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:55.595938 1117274 api_server.go:166] Checking apiserver status ...
	I0731 20:32:55.595978 1117274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:55.608636 1117274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:32:55.617065 1117274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:55.617110 1117274 ssh_runner.go:195] Run: ls
	I0731 20:32:55.621390 1117274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:55.627471 1117274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:55.627495 1117274 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:32:55.627531 1117274 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:55.627557 1117274 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:32:55.627856 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.627893 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.643500 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
	I0731 20:32:55.643905 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.644510 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.644538 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.644916 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.645133 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:32:55.646893 1117274 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:32:55.646913 1117274 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:55.647329 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.647376 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.662180 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46125
	I0731 20:32:55.662626 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.663082 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.663104 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.663490 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.663734 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:32:55.666486 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:55.666890 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:55.666914 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:55.667050 1117274 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:32:55.667386 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:55.667438 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:55.682079 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0731 20:32:55.682428 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:55.682849 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:55.682870 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:55.683175 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:55.683366 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:32:55.683532 1117274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:55.683560 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:32:55.686168 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:55.686602 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:32:55.686622 1117274 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:32:55.686773 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:32:55.686924 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:32:55.687066 1117274 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:32:55.687214 1117274 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	W0731 20:32:58.752395 1117274 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.149:22: connect: no route to host
	W0731 20:32:58.752483 1117274 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	E0731 20:32:58.752497 1117274 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:58.752507 1117274 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:32:58.752531 1117274 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.149:22: connect: no route to host
	I0731 20:32:58.752539 1117274 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:32:58.752887 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.752934 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.768124 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0731 20:32:58.768628 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.769176 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.769199 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.769528 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.769731 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:32:58.771447 1117274 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:32:58.771465 1117274 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:58.771787 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.771826 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.786889 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0731 20:32:58.787344 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.787796 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.787825 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.788203 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.788407 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:32:58.791209 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:58.791640 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:58.791662 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:58.791798 1117274 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:32:58.792233 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.792282 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.806872 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0731 20:32:58.807309 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.807782 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.807818 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.808128 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.808311 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:32:58.808491 1117274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:58.808511 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:32:58.811195 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:58.811641 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:32:58.811668 1117274 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:32:58.811818 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:32:58.811982 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:32:58.812141 1117274 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:32:58.812291 1117274 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:32:58.892163 1117274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:58.906035 1117274 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:32:58.906064 1117274 api_server.go:166] Checking apiserver status ...
	I0731 20:32:58.906097 1117274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:32:58.918926 1117274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:32:58.928063 1117274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:32:58.928123 1117274 ssh_runner.go:195] Run: ls
	I0731 20:32:58.932104 1117274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:32:58.937885 1117274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:32:58.937911 1117274 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:32:58.937923 1117274 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:32:58.937943 1117274 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:32:58.938333 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.938381 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.954596 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0731 20:32:58.955056 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.955571 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.955598 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.955919 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.956129 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:32:58.957761 1117274 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:32:58.957791 1117274 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:58.958106 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.958141 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.973484 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0731 20:32:58.973949 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.974456 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.974478 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.974889 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.975097 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:32:58.978472 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:58.979008 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:58.979037 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:58.979186 1117274 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:32:58.979616 1117274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:32:58.979669 1117274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:32:58.997318 1117274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
	I0731 20:32:58.997760 1117274 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:32:58.998351 1117274 main.go:141] libmachine: Using API Version  1
	I0731 20:32:58.998379 1117274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:32:58.998700 1117274 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:32:58.998923 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:32:58.999128 1117274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:32:58.999150 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:32:59.002045 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:59.002504 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:32:59.002525 1117274 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:32:59.002720 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:32:59.002898 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:32:59.003045 1117274 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:32:59.003189 1117274 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:32:59.078815 1117274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:32:59.091703 1117274 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 7 (610.735078ms)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:33:07.052997 1117419 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:33:07.053263 1117419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:07.053272 1117419 out.go:304] Setting ErrFile to fd 2...
	I0731 20:33:07.053277 1117419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:07.053441 1117419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:33:07.053597 1117419 out.go:298] Setting JSON to false
	I0731 20:33:07.053631 1117419 mustload.go:65] Loading cluster: ha-430887
	I0731 20:33:07.053656 1117419 notify.go:220] Checking for updates...
	I0731 20:33:07.053996 1117419 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:33:07.054014 1117419 status.go:255] checking status of ha-430887 ...
	I0731 20:33:07.054384 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.054452 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.072035 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I0731 20:33:07.072605 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.073232 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.073256 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.073640 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.073835 1117419 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:33:07.075648 1117419 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:33:07.075677 1117419 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:33:07.076012 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.076070 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.092498 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0731 20:33:07.092958 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.093548 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.093586 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.093963 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.094189 1117419 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:33:07.097505 1117419 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:07.097966 1117419 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:33:07.097999 1117419 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:07.098136 1117419 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:33:07.098450 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.098486 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.116394 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0731 20:33:07.116806 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.117328 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.117348 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.117728 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.117945 1117419 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:33:07.118136 1117419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:07.118179 1117419 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:33:07.121395 1117419 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:07.121947 1117419 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:33:07.121974 1117419 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:07.122138 1117419 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:33:07.122327 1117419 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:33:07.122514 1117419 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:33:07.122664 1117419 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:33:07.203070 1117419 ssh_runner.go:195] Run: systemctl --version
	I0731 20:33:07.208739 1117419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:07.222899 1117419 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:33:07.222932 1117419 api_server.go:166] Checking apiserver status ...
	I0731 20:33:07.222981 1117419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:33:07.236244 1117419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:33:07.246174 1117419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:33:07.246247 1117419 ssh_runner.go:195] Run: ls
	I0731 20:33:07.250308 1117419 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:33:07.254773 1117419 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:33:07.254797 1117419 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:33:07.254808 1117419 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:07.254835 1117419 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:33:07.255150 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.255193 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.271193 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0731 20:33:07.271705 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.272239 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.272263 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.272618 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.272822 1117419 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:33:07.274324 1117419 status.go:330] ha-430887-m02 host status = "Stopped" (err=<nil>)
	I0731 20:33:07.274341 1117419 status.go:343] host is not running, skipping remaining checks
	I0731 20:33:07.274347 1117419 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:07.274363 1117419 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:33:07.274679 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.274740 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.290034 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0731 20:33:07.290474 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.291009 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.291038 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.291362 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.291578 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:33:07.293285 1117419 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:33:07.293302 1117419 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:33:07.293705 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.293747 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.309708 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46497
	I0731 20:33:07.310129 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.310659 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.310683 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.311038 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.311264 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:33:07.314278 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:07.314703 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:33:07.314733 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:07.314892 1117419 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:33:07.315221 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.315258 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.332035 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0731 20:33:07.332533 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.333024 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.333045 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.333382 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.333606 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:33:07.333828 1117419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:07.333855 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:33:07.336632 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:07.337033 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:33:07.337065 1117419 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:07.337231 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:33:07.337396 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:33:07.337549 1117419 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:33:07.337717 1117419 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:33:07.414882 1117419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:07.428877 1117419 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:33:07.428909 1117419 api_server.go:166] Checking apiserver status ...
	I0731 20:33:07.428949 1117419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:33:07.442019 1117419 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:33:07.450788 1117419 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:33:07.450848 1117419 ssh_runner.go:195] Run: ls
	I0731 20:33:07.454605 1117419 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:33:07.460525 1117419 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:33:07.460547 1117419 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:33:07.460556 1117419 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:07.460572 1117419 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:33:07.460980 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.461009 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.476181 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0731 20:33:07.476656 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.477182 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.477205 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.477486 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.477709 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:33:07.479334 1117419 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:33:07.479350 1117419 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:33:07.479733 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.479767 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.494697 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0731 20:33:07.495161 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.495623 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.495645 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.495953 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.496159 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:33:07.498602 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:07.499047 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:33:07.499075 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:07.499177 1117419 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:33:07.499498 1117419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:07.499523 1117419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:07.514633 1117419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0731 20:33:07.515037 1117419 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:07.515586 1117419 main.go:141] libmachine: Using API Version  1
	I0731 20:33:07.515618 1117419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:07.515931 1117419 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:07.516168 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:33:07.516352 1117419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:07.516371 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:33:07.519114 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:07.519519 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:33:07.519555 1117419 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:07.519708 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:33:07.519887 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:33:07.520062 1117419 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:33:07.520220 1117419 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:33:07.599074 1117419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:07.612233 1117419 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 7 (603.766714ms)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-430887-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:33:15.146285 1117523 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:33:15.146604 1117523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:15.146615 1117523 out.go:304] Setting ErrFile to fd 2...
	I0731 20:33:15.146620 1117523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:15.146807 1117523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:33:15.146972 1117523 out.go:298] Setting JSON to false
	I0731 20:33:15.147001 1117523 mustload.go:65] Loading cluster: ha-430887
	I0731 20:33:15.147131 1117523 notify.go:220] Checking for updates...
	I0731 20:33:15.147415 1117523 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:33:15.147434 1117523 status.go:255] checking status of ha-430887 ...
	I0731 20:33:15.147904 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.147991 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.167541 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0731 20:33:15.168047 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.168649 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.168667 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.169132 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.169394 1117523 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:33:15.171215 1117523 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:33:15.171242 1117523 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:33:15.171552 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.171601 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.187299 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0731 20:33:15.187745 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.188340 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.188369 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.188710 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.188909 1117523 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:33:15.192238 1117523 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:15.192619 1117523 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:33:15.192653 1117523 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:15.192790 1117523 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:33:15.193195 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.193236 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.210670 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I0731 20:33:15.211066 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.211592 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.211615 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.211922 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.212165 1117523 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:33:15.212409 1117523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:15.212453 1117523 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:33:15.215545 1117523 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:15.216076 1117523 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:33:15.216112 1117523 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:33:15.216249 1117523 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:33:15.216434 1117523 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:33:15.216618 1117523 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:33:15.216743 1117523 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:33:15.294898 1117523 ssh_runner.go:195] Run: systemctl --version
	I0731 20:33:15.300359 1117523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:15.313968 1117523 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:33:15.313998 1117523 api_server.go:166] Checking apiserver status ...
	I0731 20:33:15.314034 1117523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:33:15.326513 1117523 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0731 20:33:15.334982 1117523 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:33:15.335035 1117523 ssh_runner.go:195] Run: ls
	I0731 20:33:15.339154 1117523 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:33:15.343057 1117523 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:33:15.343079 1117523 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:33:15.343089 1117523 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:15.343113 1117523 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:33:15.343455 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.343481 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.359665 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0731 20:33:15.360074 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.360530 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.360563 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.360891 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.361117 1117523 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:33:15.362556 1117523 status.go:330] ha-430887-m02 host status = "Stopped" (err=<nil>)
	I0731 20:33:15.362572 1117523 status.go:343] host is not running, skipping remaining checks
	I0731 20:33:15.362580 1117523 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:15.362611 1117523 status.go:255] checking status of ha-430887-m03 ...
	I0731 20:33:15.362906 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.362934 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.378639 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0731 20:33:15.379115 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.379634 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.379657 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.379971 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.380166 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:33:15.381648 1117523 status.go:330] ha-430887-m03 host status = "Running" (err=<nil>)
	I0731 20:33:15.381669 1117523 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:33:15.381960 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.381984 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.397319 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0731 20:33:15.397720 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.398156 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.398181 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.398489 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.398685 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:33:15.401333 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:15.401833 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:33:15.401868 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:15.402001 1117523 host.go:66] Checking if "ha-430887-m03" exists ...
	I0731 20:33:15.402411 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.402457 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.417447 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33043
	I0731 20:33:15.417929 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.418414 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.418438 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.418738 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.418891 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:33:15.419057 1117523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:15.419082 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:33:15.421765 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:15.422156 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:33:15.422180 1117523 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:15.422323 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:33:15.422477 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:33:15.422627 1117523 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:33:15.422751 1117523 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:33:15.502969 1117523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:15.520706 1117523 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:33:15.520738 1117523 api_server.go:166] Checking apiserver status ...
	I0731 20:33:15.520782 1117523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:33:15.534569 1117523 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup
	W0731 20:33:15.543998 1117523 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1543/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:33:15.544046 1117523 ssh_runner.go:195] Run: ls
	I0731 20:33:15.548199 1117523 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:33:15.552078 1117523 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:33:15.552121 1117523 status.go:422] ha-430887-m03 apiserver status = Running (err=<nil>)
	I0731 20:33:15.552133 1117523 status.go:257] ha-430887-m03 status: &{Name:ha-430887-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:33:15.552154 1117523 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:33:15.552440 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.552468 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.568187 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0731 20:33:15.568637 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.569159 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.569179 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.569555 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.569797 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:33:15.571463 1117523 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:33:15.571478 1117523 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:33:15.571894 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.571930 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.587098 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0731 20:33:15.587572 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.588027 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.588052 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.588375 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.588585 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:33:15.591528 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:15.591977 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:33:15.592008 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:15.592193 1117523 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:33:15.592470 1117523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:15.592503 1117523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:15.606963 1117523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0731 20:33:15.607370 1117523 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:15.607825 1117523 main.go:141] libmachine: Using API Version  1
	I0731 20:33:15.607847 1117523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:15.608143 1117523 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:15.608325 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:33:15.608495 1117523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:33:15.608514 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:33:15.611067 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:15.611495 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:33:15.611522 1117523 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:15.611663 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:33:15.611831 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:33:15.612004 1117523 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:33:15.612206 1117523 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:33:15.690351 1117523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:33:15.703355 1117523 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430887 -n ha-430887
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-430887 logs -n 25: (1.328235134s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m03_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m04 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp testdata/cp-test.txt                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m04_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03:/home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m03 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-430887 node stop m02 -v=7                                                     | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-430887 node start m02 -v=7                                                    | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:25:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:25:18.910914 1111910 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:25:18.911204 1111910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:25:18.911214 1111910 out.go:304] Setting ErrFile to fd 2...
	I0731 20:25:18.911219 1111910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:25:18.911425 1111910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:25:18.912044 1111910 out.go:298] Setting JSON to false
	I0731 20:25:18.913045 1111910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14870,"bootTime":1722442649,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:25:18.913112 1111910 start.go:139] virtualization: kvm guest
	I0731 20:25:18.915390 1111910 out.go:177] * [ha-430887] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:25:18.916792 1111910 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:25:18.916791 1111910 notify.go:220] Checking for updates...
	I0731 20:25:18.919661 1111910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:25:18.921153 1111910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:25:18.922508 1111910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:18.923770 1111910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:25:18.925289 1111910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:25:18.926887 1111910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:25:18.962913 1111910 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:25:18.964226 1111910 start.go:297] selected driver: kvm2
	I0731 20:25:18.964238 1111910 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:25:18.964249 1111910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:25:18.965062 1111910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:25:18.965145 1111910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:25:18.980874 1111910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:25:18.980962 1111910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:25:18.981255 1111910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:25:18.981311 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:18.981329 1111910 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 20:25:18.981339 1111910 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 20:25:18.981451 1111910 start.go:340] cluster config:
	{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:25:18.981584 1111910 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:25:18.984220 1111910 out.go:177] * Starting "ha-430887" primary control-plane node in "ha-430887" cluster
	I0731 20:25:18.985418 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:25:18.985463 1111910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:25:18.985477 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:25:18.985588 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:25:18.985601 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:25:18.986022 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:25:18.986056 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json: {Name:mk4dcae038756b36a484940a0ad4406989974a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:18.986231 1111910 start.go:360] acquireMachinesLock for ha-430887: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:25:18.986278 1111910 start.go:364] duration metric: took 27.698µs to acquireMachinesLock for "ha-430887"
	I0731 20:25:18.986302 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:25:18.986392 1111910 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:25:18.988702 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:25:18.988867 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:25:18.988911 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:25:19.004001 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0731 20:25:19.004605 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:25:19.005159 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:25:19.005178 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:25:19.005626 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:25:19.005789 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:19.005966 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:19.006133 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:25:19.006177 1111910 client.go:168] LocalClient.Create starting
	I0731 20:25:19.006217 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:25:19.006251 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:25:19.006269 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:25:19.006325 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:25:19.006352 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:25:19.006371 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:25:19.006392 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:25:19.006404 1111910 main.go:141] libmachine: (ha-430887) Calling .PreCreateCheck
	I0731 20:25:19.006715 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:19.007118 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:25:19.007136 1111910 main.go:141] libmachine: (ha-430887) Calling .Create
	I0731 20:25:19.007246 1111910 main.go:141] libmachine: (ha-430887) Creating KVM machine...
	I0731 20:25:19.008638 1111910 main.go:141] libmachine: (ha-430887) DBG | found existing default KVM network
	I0731 20:25:19.009392 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.009254 1111933 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d350}
	I0731 20:25:19.009416 1111910 main.go:141] libmachine: (ha-430887) DBG | created network xml: 
	I0731 20:25:19.009429 1111910 main.go:141] libmachine: (ha-430887) DBG | <network>
	I0731 20:25:19.009436 1111910 main.go:141] libmachine: (ha-430887) DBG |   <name>mk-ha-430887</name>
	I0731 20:25:19.009447 1111910 main.go:141] libmachine: (ha-430887) DBG |   <dns enable='no'/>
	I0731 20:25:19.009456 1111910 main.go:141] libmachine: (ha-430887) DBG |   
	I0731 20:25:19.009467 1111910 main.go:141] libmachine: (ha-430887) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 20:25:19.009478 1111910 main.go:141] libmachine: (ha-430887) DBG |     <dhcp>
	I0731 20:25:19.009503 1111910 main.go:141] libmachine: (ha-430887) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 20:25:19.009538 1111910 main.go:141] libmachine: (ha-430887) DBG |     </dhcp>
	I0731 20:25:19.009553 1111910 main.go:141] libmachine: (ha-430887) DBG |   </ip>
	I0731 20:25:19.009560 1111910 main.go:141] libmachine: (ha-430887) DBG |   
	I0731 20:25:19.009570 1111910 main.go:141] libmachine: (ha-430887) DBG | </network>
	I0731 20:25:19.009578 1111910 main.go:141] libmachine: (ha-430887) DBG | 
	I0731 20:25:19.014449 1111910 main.go:141] libmachine: (ha-430887) DBG | trying to create private KVM network mk-ha-430887 192.168.39.0/24...
	I0731 20:25:19.080321 1111910 main.go:141] libmachine: (ha-430887) DBG | private KVM network mk-ha-430887 192.168.39.0/24 created
	I0731 20:25:19.080363 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.080257 1111933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:19.080379 1111910 main.go:141] libmachine: (ha-430887) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 ...
	I0731 20:25:19.080397 1111910 main.go:141] libmachine: (ha-430887) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:25:19.080416 1111910 main.go:141] libmachine: (ha-430887) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:25:19.367276 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.367138 1111933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa...
	I0731 20:25:19.586177 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.586061 1111933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/ha-430887.rawdisk...
	I0731 20:25:19.586206 1111910 main.go:141] libmachine: (ha-430887) DBG | Writing magic tar header
	I0731 20:25:19.586221 1111910 main.go:141] libmachine: (ha-430887) DBG | Writing SSH key tar header
	I0731 20:25:19.586239 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:19.586206 1111933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 ...
	I0731 20:25:19.586389 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887
	I0731 20:25:19.586416 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:25:19.586428 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887 (perms=drwx------)
	I0731 20:25:19.586439 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:25:19.586449 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:25:19.586461 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:25:19.586470 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:25:19.586483 1111910 main.go:141] libmachine: (ha-430887) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:25:19.586491 1111910 main.go:141] libmachine: (ha-430887) Creating domain...
	I0731 20:25:19.586502 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:25:19.586518 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:25:19.586545 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:25:19.586559 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:25:19.586567 1111910 main.go:141] libmachine: (ha-430887) DBG | Checking permissions on dir: /home
	I0731 20:25:19.586602 1111910 main.go:141] libmachine: (ha-430887) DBG | Skipping /home - not owner
	I0731 20:25:19.587616 1111910 main.go:141] libmachine: (ha-430887) define libvirt domain using xml: 
	I0731 20:25:19.587642 1111910 main.go:141] libmachine: (ha-430887) <domain type='kvm'>
	I0731 20:25:19.587650 1111910 main.go:141] libmachine: (ha-430887)   <name>ha-430887</name>
	I0731 20:25:19.587658 1111910 main.go:141] libmachine: (ha-430887)   <memory unit='MiB'>2200</memory>
	I0731 20:25:19.587701 1111910 main.go:141] libmachine: (ha-430887)   <vcpu>2</vcpu>
	I0731 20:25:19.587721 1111910 main.go:141] libmachine: (ha-430887)   <features>
	I0731 20:25:19.587735 1111910 main.go:141] libmachine: (ha-430887)     <acpi/>
	I0731 20:25:19.587744 1111910 main.go:141] libmachine: (ha-430887)     <apic/>
	I0731 20:25:19.587752 1111910 main.go:141] libmachine: (ha-430887)     <pae/>
	I0731 20:25:19.587764 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.587773 1111910 main.go:141] libmachine: (ha-430887)   </features>
	I0731 20:25:19.587783 1111910 main.go:141] libmachine: (ha-430887)   <cpu mode='host-passthrough'>
	I0731 20:25:19.587791 1111910 main.go:141] libmachine: (ha-430887)   
	I0731 20:25:19.587799 1111910 main.go:141] libmachine: (ha-430887)   </cpu>
	I0731 20:25:19.587818 1111910 main.go:141] libmachine: (ha-430887)   <os>
	I0731 20:25:19.587836 1111910 main.go:141] libmachine: (ha-430887)     <type>hvm</type>
	I0731 20:25:19.587846 1111910 main.go:141] libmachine: (ha-430887)     <boot dev='cdrom'/>
	I0731 20:25:19.587856 1111910 main.go:141] libmachine: (ha-430887)     <boot dev='hd'/>
	I0731 20:25:19.587867 1111910 main.go:141] libmachine: (ha-430887)     <bootmenu enable='no'/>
	I0731 20:25:19.587886 1111910 main.go:141] libmachine: (ha-430887)   </os>
	I0731 20:25:19.587896 1111910 main.go:141] libmachine: (ha-430887)   <devices>
	I0731 20:25:19.587908 1111910 main.go:141] libmachine: (ha-430887)     <disk type='file' device='cdrom'>
	I0731 20:25:19.587923 1111910 main.go:141] libmachine: (ha-430887)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/boot2docker.iso'/>
	I0731 20:25:19.587936 1111910 main.go:141] libmachine: (ha-430887)       <target dev='hdc' bus='scsi'/>
	I0731 20:25:19.587946 1111910 main.go:141] libmachine: (ha-430887)       <readonly/>
	I0731 20:25:19.587954 1111910 main.go:141] libmachine: (ha-430887)     </disk>
	I0731 20:25:19.587964 1111910 main.go:141] libmachine: (ha-430887)     <disk type='file' device='disk'>
	I0731 20:25:19.587971 1111910 main.go:141] libmachine: (ha-430887)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:25:19.587981 1111910 main.go:141] libmachine: (ha-430887)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/ha-430887.rawdisk'/>
	I0731 20:25:19.587987 1111910 main.go:141] libmachine: (ha-430887)       <target dev='hda' bus='virtio'/>
	I0731 20:25:19.587997 1111910 main.go:141] libmachine: (ha-430887)     </disk>
	I0731 20:25:19.588010 1111910 main.go:141] libmachine: (ha-430887)     <interface type='network'>
	I0731 20:25:19.588026 1111910 main.go:141] libmachine: (ha-430887)       <source network='mk-ha-430887'/>
	I0731 20:25:19.588038 1111910 main.go:141] libmachine: (ha-430887)       <model type='virtio'/>
	I0731 20:25:19.588049 1111910 main.go:141] libmachine: (ha-430887)     </interface>
	I0731 20:25:19.588058 1111910 main.go:141] libmachine: (ha-430887)     <interface type='network'>
	I0731 20:25:19.588069 1111910 main.go:141] libmachine: (ha-430887)       <source network='default'/>
	I0731 20:25:19.588081 1111910 main.go:141] libmachine: (ha-430887)       <model type='virtio'/>
	I0731 20:25:19.588102 1111910 main.go:141] libmachine: (ha-430887)     </interface>
	I0731 20:25:19.588114 1111910 main.go:141] libmachine: (ha-430887)     <serial type='pty'>
	I0731 20:25:19.588129 1111910 main.go:141] libmachine: (ha-430887)       <target port='0'/>
	I0731 20:25:19.588143 1111910 main.go:141] libmachine: (ha-430887)     </serial>
	I0731 20:25:19.588155 1111910 main.go:141] libmachine: (ha-430887)     <console type='pty'>
	I0731 20:25:19.588168 1111910 main.go:141] libmachine: (ha-430887)       <target type='serial' port='0'/>
	I0731 20:25:19.588179 1111910 main.go:141] libmachine: (ha-430887)     </console>
	I0731 20:25:19.588190 1111910 main.go:141] libmachine: (ha-430887)     <rng model='virtio'>
	I0731 20:25:19.588203 1111910 main.go:141] libmachine: (ha-430887)       <backend model='random'>/dev/random</backend>
	I0731 20:25:19.588216 1111910 main.go:141] libmachine: (ha-430887)     </rng>
	I0731 20:25:19.588227 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.588237 1111910 main.go:141] libmachine: (ha-430887)     
	I0731 20:25:19.588244 1111910 main.go:141] libmachine: (ha-430887)   </devices>
	I0731 20:25:19.588253 1111910 main.go:141] libmachine: (ha-430887) </domain>
	I0731 20:25:19.588263 1111910 main.go:141] libmachine: (ha-430887) 
	I0731 20:25:19.592459 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:4e:c7:83 in network default
	I0731 20:25:19.593045 1111910 main.go:141] libmachine: (ha-430887) Ensuring networks are active...
	I0731 20:25:19.593060 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:19.593738 1111910 main.go:141] libmachine: (ha-430887) Ensuring network default is active
	I0731 20:25:19.594076 1111910 main.go:141] libmachine: (ha-430887) Ensuring network mk-ha-430887 is active
	I0731 20:25:19.594565 1111910 main.go:141] libmachine: (ha-430887) Getting domain xml...
	I0731 20:25:19.595346 1111910 main.go:141] libmachine: (ha-430887) Creating domain...
	I0731 20:25:20.785997 1111910 main.go:141] libmachine: (ha-430887) Waiting to get IP...
	I0731 20:25:20.786882 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:20.787271 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:20.787318 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:20.787268 1111933 retry.go:31] will retry after 288.448441ms: waiting for machine to come up
	I0731 20:25:21.077798 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.078186 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.078228 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.078137 1111933 retry.go:31] will retry after 252.829338ms: waiting for machine to come up
	I0731 20:25:21.332877 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.333430 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.333451 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.333379 1111933 retry.go:31] will retry after 334.800359ms: waiting for machine to come up
	I0731 20:25:21.669873 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:21.670216 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:21.670241 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:21.670168 1111933 retry.go:31] will retry after 472.221199ms: waiting for machine to come up
	I0731 20:25:22.143436 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:22.143930 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:22.143959 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:22.143872 1111933 retry.go:31] will retry after 559.007443ms: waiting for machine to come up
	I0731 20:25:22.704692 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:22.705099 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:22.705130 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:22.705032 1111933 retry.go:31] will retry after 897.504113ms: waiting for machine to come up
	I0731 20:25:23.604024 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:23.604389 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:23.604420 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:23.604347 1111933 retry.go:31] will retry after 1.120126909s: waiting for machine to come up
	I0731 20:25:24.726083 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:24.726625 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:24.726654 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:24.726570 1111933 retry.go:31] will retry after 1.143168622s: waiting for machine to come up
	I0731 20:25:25.870828 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:25.871310 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:25.871342 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:25.871253 1111933 retry.go:31] will retry after 1.606766772s: waiting for machine to come up
	I0731 20:25:27.480277 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:27.480740 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:27.480775 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:27.480678 1111933 retry.go:31] will retry after 1.912815338s: waiting for machine to come up
	I0731 20:25:29.394806 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:29.395236 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:29.395265 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:29.395172 1111933 retry.go:31] will retry after 2.201647109s: waiting for machine to come up
	I0731 20:25:31.599462 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:31.599906 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:31.599936 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:31.599856 1111933 retry.go:31] will retry after 3.569826584s: waiting for machine to come up
	I0731 20:25:35.170903 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:35.171313 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find current IP address of domain ha-430887 in network mk-ha-430887
	I0731 20:25:35.171339 1111910 main.go:141] libmachine: (ha-430887) DBG | I0731 20:25:35.171261 1111933 retry.go:31] will retry after 3.217563206s: waiting for machine to come up
	I0731 20:25:38.392646 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.393130 1111910 main.go:141] libmachine: (ha-430887) Found IP for machine: 192.168.39.195
	I0731 20:25:38.393159 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has current primary IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.393165 1111910 main.go:141] libmachine: (ha-430887) Reserving static IP address...
	I0731 20:25:38.393561 1111910 main.go:141] libmachine: (ha-430887) DBG | unable to find host DHCP lease matching {name: "ha-430887", mac: "52:54:00:10:dc:43", ip: "192.168.39.195"} in network mk-ha-430887
	I0731 20:25:38.468809 1111910 main.go:141] libmachine: (ha-430887) DBG | Getting to WaitForSSH function...
	I0731 20:25:38.468844 1111910 main.go:141] libmachine: (ha-430887) Reserved static IP address: 192.168.39.195
	I0731 20:25:38.468857 1111910 main.go:141] libmachine: (ha-430887) Waiting for SSH to be available...
	I0731 20:25:38.471357 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.471785 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.471816 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.471998 1111910 main.go:141] libmachine: (ha-430887) DBG | Using SSH client type: external
	I0731 20:25:38.472027 1111910 main.go:141] libmachine: (ha-430887) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa (-rw-------)
	I0731 20:25:38.472062 1111910 main.go:141] libmachine: (ha-430887) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:25:38.472079 1111910 main.go:141] libmachine: (ha-430887) DBG | About to run SSH command:
	I0731 20:25:38.472107 1111910 main.go:141] libmachine: (ha-430887) DBG | exit 0
	I0731 20:25:38.595719 1111910 main.go:141] libmachine: (ha-430887) DBG | SSH cmd err, output: <nil>: 
	I0731 20:25:38.595942 1111910 main.go:141] libmachine: (ha-430887) KVM machine creation complete!
	I0731 20:25:38.596288 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:38.596859 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:38.597059 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:38.597195 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:25:38.597210 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:25:38.598415 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:25:38.598440 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:25:38.598448 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:25:38.598456 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.600580 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.600914 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.600936 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.601056 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.601245 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.601394 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.601493 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.601638 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.601836 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.601847 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:25:38.703285 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:25:38.703309 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:25:38.703316 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.706210 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.706585 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.706611 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.706796 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.706958 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.707137 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.707252 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.707372 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.707560 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.707571 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:25:38.808324 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:25:38.808386 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:25:38.808392 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:25:38.808400 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:38.808637 1111910 buildroot.go:166] provisioning hostname "ha-430887"
	I0731 20:25:38.808666 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:38.808886 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.811473 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.811815 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.811844 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.811959 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.812157 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.812313 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.812419 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.812597 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.812785 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.812796 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887 && echo "ha-430887" | sudo tee /etc/hostname
	I0731 20:25:38.929052 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:25:38.929092 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:38.931708 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.932160 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:38.932186 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:38.932293 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:38.932504 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.932676 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:38.932849 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:38.933028 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:38.933254 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:38.933277 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:25:39.043990 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:25:39.044064 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:25:39.044159 1111910 buildroot.go:174] setting up certificates
	I0731 20:25:39.044173 1111910 provision.go:84] configureAuth start
	I0731 20:25:39.044191 1111910 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:25:39.044484 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.047052 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.047439 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.047459 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.047603 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.049597 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.049889 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.049912 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.050034 1111910 provision.go:143] copyHostCerts
	I0731 20:25:39.050061 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:25:39.050093 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:25:39.050103 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:25:39.050190 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:25:39.050311 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:25:39.050338 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:25:39.050347 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:25:39.050385 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:25:39.050462 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:25:39.050500 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:25:39.050509 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:25:39.050563 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:25:39.050673 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887 san=[127.0.0.1 192.168.39.195 ha-430887 localhost minikube]
	I0731 20:25:39.123742 1111910 provision.go:177] copyRemoteCerts
	I0731 20:25:39.123801 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:25:39.123836 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.126665 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.126997 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.127017 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.127285 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.127500 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.127702 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.127861 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.209849 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:25:39.209931 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:25:39.231847 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:25:39.231909 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:25:39.252992 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:25:39.253063 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 20:25:39.274174 1111910 provision.go:87] duration metric: took 229.983854ms to configureAuth
	I0731 20:25:39.274202 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:25:39.274892 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:25:39.275044 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.278157 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.278558 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.278589 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.278753 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.278935 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.279129 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.279256 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.279428 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:39.279625 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:39.279648 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:25:39.527070 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:25:39.527102 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:25:39.527109 1111910 main.go:141] libmachine: (ha-430887) Calling .GetURL
	I0731 20:25:39.528415 1111910 main.go:141] libmachine: (ha-430887) DBG | Using libvirt version 6000000
	I0731 20:25:39.530372 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.530721 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.530752 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.530904 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:25:39.530918 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:25:39.530927 1111910 client.go:171] duration metric: took 20.524737988s to LocalClient.Create
	I0731 20:25:39.530959 1111910 start.go:167] duration metric: took 20.524828329s to libmachine.API.Create "ha-430887"
	I0731 20:25:39.530972 1111910 start.go:293] postStartSetup for "ha-430887" (driver="kvm2")
	I0731 20:25:39.530986 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:25:39.531010 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.531239 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:25:39.531265 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.533320 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.533614 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.533634 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.533814 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.533988 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.534184 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.534321 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.613502 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:25:39.617335 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:25:39.617362 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:25:39.617443 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:25:39.617533 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:25:39.617546 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:25:39.617665 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:25:39.626250 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:25:39.647722 1111910 start.go:296] duration metric: took 116.738226ms for postStartSetup
	I0731 20:25:39.647774 1111910 main.go:141] libmachine: (ha-430887) Calling .GetConfigRaw
	I0731 20:25:39.648411 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.651097 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.651544 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.651571 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.651785 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:25:39.651981 1111910 start.go:128] duration metric: took 20.665577325s to createHost
	I0731 20:25:39.652024 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.654259 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.654574 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.654607 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.654687 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.654874 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.655060 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.655184 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.655346 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:25:39.655517 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:25:39.655527 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:25:39.756417 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457539.730567352
	
	I0731 20:25:39.756440 1111910 fix.go:216] guest clock: 1722457539.730567352
	I0731 20:25:39.756449 1111910 fix.go:229] Guest: 2024-07-31 20:25:39.730567352 +0000 UTC Remote: 2024-07-31 20:25:39.651994642 +0000 UTC m=+20.776148366 (delta=78.57271ms)
	I0731 20:25:39.756492 1111910 fix.go:200] guest clock delta is within tolerance: 78.57271ms
	I0731 20:25:39.756498 1111910 start.go:83] releasing machines lock for "ha-430887", held for 20.77020991s
	I0731 20:25:39.756520 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.756840 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:39.760054 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.760454 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.760481 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.760625 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761109 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761293 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:25:39.761391 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:25:39.761444 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.761512 1111910 ssh_runner.go:195] Run: cat /version.json
	I0731 20:25:39.761522 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:25:39.763886 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764219 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.764248 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764315 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764392 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.764583 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.764723 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.764764 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:39.764787 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:39.764871 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.764971 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:25:39.765117 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:25:39.765272 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:25:39.765439 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:25:39.860419 1111910 ssh_runner.go:195] Run: systemctl --version
	I0731 20:25:39.865997 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:25:40.025196 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:25:40.030535 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:25:40.030610 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:25:40.045476 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:25:40.045510 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:25:40.045636 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:25:40.060533 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:25:40.073990 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:25:40.074048 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:25:40.086497 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:25:40.098909 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:25:40.205257 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:25:40.340341 1111910 docker.go:233] disabling docker service ...
	I0731 20:25:40.340439 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:25:40.360998 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:25:40.373434 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:25:40.505802 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:25:40.620887 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:25:40.633833 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:25:40.650441 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:25:40.650505 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.659345 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:25:40.659437 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.668428 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.677102 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.686086 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:25:40.695645 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.704885 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.720254 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:25:40.729175 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:25:40.737141 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:25:40.737200 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:25:40.747929 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:25:40.756170 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:25:40.867506 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:25:40.990360 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:25:40.990446 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:25:40.994699 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:25:40.994769 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:25:40.998197 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:25:41.030740 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:25:41.030869 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:25:41.055604 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:25:41.082034 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:25:41.083362 1111910 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:25:41.085829 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:41.086170 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:25:41.086203 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:25:41.086386 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:25:41.090040 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:25:41.101721 1111910 kubeadm.go:883] updating cluster {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:25:41.101842 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:25:41.101889 1111910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:25:41.131420 1111910 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:25:41.131504 1111910 ssh_runner.go:195] Run: which lz4
	I0731 20:25:41.135133 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 20:25:41.135241 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:25:41.138935 1111910 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:25:41.138970 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:25:42.318932 1111910 crio.go:462] duration metric: took 1.183725382s to copy over tarball
	I0731 20:25:42.319014 1111910 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:25:44.347688 1111910 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028636866s)
	I0731 20:25:44.347725 1111910 crio.go:469] duration metric: took 2.028760944s to extract the tarball
	I0731 20:25:44.347736 1111910 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:25:44.383939 1111910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:25:44.426129 1111910 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:25:44.426153 1111910 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:25:44.426162 1111910 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0731 20:25:44.426273 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:25:44.426346 1111910 ssh_runner.go:195] Run: crio config
	I0731 20:25:44.467760 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:44.467783 1111910 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 20:25:44.467793 1111910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:25:44.467815 1111910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430887 NodeName:ha-430887 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:25:44.467970 1111910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430887"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
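The kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init later in this log. A minimal sketch for sanity-checking such a file on the node, assuming the same binary path and file location that appear in this log, is a dry run that reports what kubeadm would do without changing the host:

	# Dry run: kubeadm prints the manifests and objects it would create, but writes nothing.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run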
	
	I0731 20:25:44.467998 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:25:44.468043 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:25:44.482515 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:25:44.482631 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
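Once the kubelet picks up the static pod above, the control-plane virtual IP it manages should answer on the API server port. A minimal check, assuming the VIP 192.168.39.254 and port 8443 shown in this manifest and the default kubeadm RBAC bindings that allow anonymous access to /healthz, is:

	# Should return "ok" once kube-vip has claimed the VIP and the API server is healthy.
	curl -k https://192.168.39.254:8443/healthz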
	I0731 20:25:44.482689 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:25:44.491723 1111910 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:25:44.491791 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 20:25:44.500247 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 20:25:44.514707 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:25:44.528884 1111910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 20:25:44.543184 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 20:25:44.557714 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:25:44.561008 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:25:44.571667 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:25:44.684801 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:25:44.700340 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.195
	I0731 20:25:44.700373 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:25:44.700398 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.700614 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:25:44.700679 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:25:44.700692 1111910 certs.go:256] generating profile certs ...
	I0731 20:25:44.700768 1111910 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:25:44.700789 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt with IP's: []
	I0731 20:25:44.916462 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt ...
	I0731 20:25:44.916496 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt: {Name:mkd3b433aa6ef2fdcaf6e733c05cf9b7b64071b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.916711 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key ...
	I0731 20:25:44.916727 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key: {Name:mke53210658faf7d54674a82834fe27cbb53cd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:44.916857 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf
	I0731 20:25:44.916880 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.254]
	I0731 20:25:45.051228 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf ...
	I0731 20:25:45.051264 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf: {Name:mk06a05e571b29664204fa70b015d5d5754cbff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.051464 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf ...
	I0731 20:25:45.051483 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf: {Name:mk8374603a62e3418a1af38d213a37a82028883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.051600 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.ee5e13cf -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:25:45.051685 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.ee5e13cf -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:25:45.051740 1111910 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:25:45.051755 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt with IP's: []
	I0731 20:25:45.291071 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt ...
	I0731 20:25:45.291105 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt: {Name:mkfa0436e509266f42d4575db891252e0ff63705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.291301 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key ...
	I0731 20:25:45.291315 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key: {Name:mk0dd3fcece20ff7bede948336cf8b1df95f7897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:25:45.291419 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:25:45.291439 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:25:45.291450 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:25:45.291464 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:25:45.291476 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:25:45.291489 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:25:45.291501 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:25:45.291512 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:25:45.291563 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:25:45.291608 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:25:45.291619 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:25:45.291642 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:25:45.291696 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:25:45.291727 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:25:45.291767 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:25:45.291795 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.291808 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.291821 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.292393 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:25:45.315526 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:25:45.336674 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:25:45.358096 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:25:45.379514 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:25:45.400799 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:25:45.421664 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:25:45.444971 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:25:45.473815 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:25:45.509793 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:25:45.535602 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:25:45.558953 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:25:45.575357 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:25:45.580647 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:25:45.591849 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.596010 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.596054 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:25:45.601397 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:25:45.611106 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:25:45.620611 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.624539 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.624575 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:25:45.629427 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:25:45.639303 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:25:45.648962 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.652748 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.652792 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:25:45.657795 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 20:25:45.667454 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:25:45.671010 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:25:45.671063 1111910 kubeadm.go:392] StartCluster: {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:25:45.671143 1111910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:25:45.671192 1111910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:25:45.704543 1111910 cri.go:89] found id: ""
	I0731 20:25:45.704632 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:25:45.714330 1111910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:25:45.723384 1111910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:25:45.732027 1111910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:25:45.732047 1111910 kubeadm.go:157] found existing configuration files:
	
	I0731 20:25:45.732104 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:25:45.740154 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:25:45.740220 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:25:45.748745 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:25:45.756896 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:25:45.756956 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:25:45.765270 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:25:45.773288 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:25:45.773335 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:25:45.781940 1111910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:25:45.789961 1111910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:25:45.790020 1111910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:25:45.798408 1111910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:25:46.002590 1111910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:25:57.171147 1111910 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 20:25:57.171233 1111910 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:25:57.171348 1111910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:25:57.171508 1111910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:25:57.171623 1111910 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:25:57.171691 1111910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:25:57.173230 1111910 out.go:204]   - Generating certificates and keys ...
	I0731 20:25:57.173293 1111910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:25:57.173350 1111910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:25:57.173436 1111910 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:25:57.173492 1111910 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:25:57.173542 1111910 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:25:57.173585 1111910 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:25:57.173630 1111910 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:25:57.173745 1111910 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-430887 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0731 20:25:57.173789 1111910 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:25:57.173926 1111910 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-430887 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0731 20:25:57.174025 1111910 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:25:57.174120 1111910 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:25:57.174196 1111910 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:25:57.174279 1111910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:25:57.174344 1111910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:25:57.174420 1111910 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 20:25:57.174496 1111910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:25:57.174593 1111910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:25:57.174644 1111910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:25:57.174730 1111910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:25:57.174837 1111910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:25:57.176268 1111910 out.go:204]   - Booting up control plane ...
	I0731 20:25:57.176360 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:25:57.176429 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:25:57.176484 1111910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:25:57.176580 1111910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:25:57.176668 1111910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:25:57.176702 1111910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:25:57.176809 1111910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 20:25:57.176906 1111910 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 20:25:57.177004 1111910 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.56619ms
	I0731 20:25:57.177067 1111910 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 20:25:57.177122 1111910 kubeadm.go:310] [api-check] The API server is healthy after 6.124767423s
	I0731 20:25:57.177206 1111910 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 20:25:57.177315 1111910 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 20:25:57.177401 1111910 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 20:25:57.177580 1111910 kubeadm.go:310] [mark-control-plane] Marking the node ha-430887 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 20:25:57.177630 1111910 kubeadm.go:310] [bootstrap-token] Using token: tzik02.6j5yn2d1mg1f7i4r
	I0731 20:25:57.178808 1111910 out.go:204]   - Configuring RBAC rules ...
	I0731 20:25:57.178901 1111910 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 20:25:57.178969 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 20:25:57.179085 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 20:25:57.179188 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 20:25:57.179295 1111910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 20:25:57.179380 1111910 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 20:25:57.179476 1111910 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 20:25:57.179514 1111910 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 20:25:57.179558 1111910 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 20:25:57.179564 1111910 kubeadm.go:310] 
	I0731 20:25:57.179614 1111910 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 20:25:57.179623 1111910 kubeadm.go:310] 
	I0731 20:25:57.179688 1111910 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 20:25:57.179697 1111910 kubeadm.go:310] 
	I0731 20:25:57.179727 1111910 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 20:25:57.179777 1111910 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 20:25:57.179819 1111910 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 20:25:57.179830 1111910 kubeadm.go:310] 
	I0731 20:25:57.179878 1111910 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 20:25:57.179884 1111910 kubeadm.go:310] 
	I0731 20:25:57.179928 1111910 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 20:25:57.179934 1111910 kubeadm.go:310] 
	I0731 20:25:57.179977 1111910 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 20:25:57.180045 1111910 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 20:25:57.180122 1111910 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 20:25:57.180133 1111910 kubeadm.go:310] 
	I0731 20:25:57.180202 1111910 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 20:25:57.180317 1111910 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 20:25:57.180331 1111910 kubeadm.go:310] 
	I0731 20:25:57.180441 1111910 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tzik02.6j5yn2d1mg1f7i4r \
	I0731 20:25:57.180562 1111910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 20:25:57.180585 1111910 kubeadm.go:310] 	--control-plane 
	I0731 20:25:57.180591 1111910 kubeadm.go:310] 
	I0731 20:25:57.180662 1111910 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 20:25:57.180669 1111910 kubeadm.go:310] 
	I0731 20:25:57.180746 1111910 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tzik02.6j5yn2d1mg1f7i4r \
	I0731 20:25:57.180850 1111910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 20:25:57.180867 1111910 cni.go:84] Creating CNI manager for ""
	I0731 20:25:57.180876 1111910 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 20:25:57.182379 1111910 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 20:25:57.183560 1111910 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 20:25:57.188768 1111910 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 20:25:57.188785 1111910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0731 20:25:57.208766 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 20:25:57.549116 1111910 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:25:57.549195 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:57.549229 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887 minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=true
	I0731 20:25:57.705734 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:57.710642 1111910 ops.go:34] apiserver oom_adj: -16
	I0731 20:25:58.205724 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:58.706404 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:59.206744 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:25:59.705751 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:00.205902 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:00.705972 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:01.205739 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:01.705881 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:02.206605 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:02.705761 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:03.205897 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:03.706124 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:04.206228 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:04.705877 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:05.205810 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:05.706578 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:06.206135 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:06.705686 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:07.205883 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:07.706190 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:08.206737 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:08.706316 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:09.206116 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:09.706075 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:10.205939 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:10.706353 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.206096 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.706733 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 20:26:11.814485 1111910 kubeadm.go:1113] duration metric: took 14.265357492s to wait for elevateKubeSystemPrivileges
	I0731 20:26:11.814529 1111910 kubeadm.go:394] duration metric: took 26.143472383s to StartCluster
	I0731 20:26:11.814548 1111910 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:11.814642 1111910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:26:11.815550 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:11.815810 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 20:26:11.815812 1111910 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:11.815838 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:26:11.815855 1111910 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:26:11.815924 1111910 addons.go:69] Setting storage-provisioner=true in profile "ha-430887"
	I0731 20:26:11.815951 1111910 addons.go:69] Setting default-storageclass=true in profile "ha-430887"
	I0731 20:26:11.816007 1111910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-430887"
	I0731 20:26:11.815959 1111910 addons.go:234] Setting addon storage-provisioner=true in "ha-430887"
	I0731 20:26:11.816078 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:11.816121 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:11.816452 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.816461 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.816483 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.816486 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.832234 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35407
	I0731 20:26:11.832298 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0731 20:26:11.832752 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.832784 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.833284 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.833295 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.833309 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.833318 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.833654 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.833681 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.833862 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.834196 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.834221 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.836583 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:26:11.836928 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 20:26:11.837460 1111910 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 20:26:11.837754 1111910 addons.go:234] Setting addon default-storageclass=true in "ha-430887"
	I0731 20:26:11.837806 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:11.838191 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.838226 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.849865 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34029
	I0731 20:26:11.850439 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.850975 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.851004 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.851383 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.851595 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.853341 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:11.855983 1111910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:26:11.856785 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0731 20:26:11.857211 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.857321 1111910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:26:11.857341 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:26:11.857362 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:11.857747 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.857767 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.858094 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.858683 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:11.858727 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:11.860796 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.861302 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:11.861352 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.861643 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:11.861813 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:11.861997 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:11.862127 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:11.874601 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0731 20:26:11.875023 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:11.875679 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:11.875702 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:11.876130 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:11.876335 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:11.878191 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:11.878427 1111910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:26:11.878443 1111910 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:26:11.878458 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:11.881350 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.881786 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:11.881810 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:11.882057 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:11.882231 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:11.882396 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:11.882530 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:11.918245 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 20:26:12.021622 1111910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:26:12.051312 1111910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:26:12.331501 1111910 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 20:26:12.440968 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.440996 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.441358 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.441382 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.441402 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.441404 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.441416 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.441688 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.441750 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.441776 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.441908 1111910 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 20:26:12.441919 1111910 round_trippers.go:469] Request Headers:
	I0731 20:26:12.441929 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:26:12.441937 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:26:12.455968 1111910 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 20:26:12.456876 1111910 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 20:26:12.456896 1111910 round_trippers.go:469] Request Headers:
	I0731 20:26:12.456909 1111910 round_trippers.go:473]     Content-Type: application/json
	I0731 20:26:12.456919 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:26:12.456928 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:26:12.463864 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:26:12.464077 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.464106 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.464446 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.464466 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.623094 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.623118 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.623466 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.623486 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.623502 1111910 main.go:141] libmachine: Making call to close driver server
	I0731 20:26:12.623512 1111910 main.go:141] libmachine: (ha-430887) Calling .Close
	I0731 20:26:12.624198 1111910 main.go:141] libmachine: (ha-430887) DBG | Closing plugin on server side
	I0731 20:26:12.624218 1111910 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:26:12.624233 1111910 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:26:12.625936 1111910 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 20:26:12.627141 1111910 addons.go:510] duration metric: took 811.286894ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0731 20:26:12.627189 1111910 start.go:246] waiting for cluster config update ...
	I0731 20:26:12.627205 1111910 start.go:255] writing updated cluster config ...
	I0731 20:26:12.628783 1111910 out.go:177] 
	I0731 20:26:12.630067 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:12.630201 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:12.631737 1111910 out.go:177] * Starting "ha-430887-m02" control-plane node in "ha-430887" cluster
	I0731 20:26:12.633162 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:26:12.633190 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:26:12.633287 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:26:12.633305 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:26:12.633364 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:12.633652 1111910 start.go:360] acquireMachinesLock for ha-430887-m02: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:26:12.633704 1111910 start.go:364] duration metric: took 29.367µs to acquireMachinesLock for "ha-430887-m02"
	I0731 20:26:12.633734 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:12.633803 1111910 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 20:26:12.635315 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:26:12.635397 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:12.635422 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:12.650971 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34673
	I0731 20:26:12.651504 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:12.652061 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:12.652110 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:12.652503 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:12.652715 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:12.652869 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:12.653025 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:26:12.653054 1111910 client.go:168] LocalClient.Create starting
	I0731 20:26:12.653091 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:26:12.653134 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:26:12.653155 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:26:12.653236 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:26:12.653269 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:26:12.653286 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:26:12.653310 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:26:12.653321 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .PreCreateCheck
	I0731 20:26:12.653535 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:12.653996 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:26:12.654017 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .Create
	I0731 20:26:12.654210 1111910 main.go:141] libmachine: (ha-430887-m02) Creating KVM machine...
	I0731 20:26:12.655537 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found existing default KVM network
	I0731 20:26:12.655682 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found existing private KVM network mk-ha-430887
	I0731 20:26:12.655842 1111910 main.go:141] libmachine: (ha-430887-m02) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 ...
	I0731 20:26:12.655869 1111910 main.go:141] libmachine: (ha-430887-m02) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:26:12.655944 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:12.655833 1112279 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:26:12.656065 1111910 main.go:141] libmachine: (ha-430887-m02) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:26:12.917937 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:12.917783 1112279 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa...
	I0731 20:26:13.216991 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:13.216842 1112279 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/ha-430887-m02.rawdisk...
	I0731 20:26:13.217040 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Writing magic tar header
	I0731 20:26:13.217051 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Writing SSH key tar header
	I0731 20:26:13.217059 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:13.216956 1112279 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 ...
	I0731 20:26:13.217155 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02 (perms=drwx------)
	I0731 20:26:13.217180 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:26:13.217194 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02
	I0731 20:26:13.217209 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:26:13.217225 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:26:13.217238 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:26:13.217249 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:26:13.217261 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:26:13.217279 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:26:13.217291 1111910 main.go:141] libmachine: (ha-430887-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:26:13.217306 1111910 main.go:141] libmachine: (ha-430887-m02) Creating domain...
	I0731 20:26:13.217319 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:26:13.217325 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:26:13.217333 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Checking permissions on dir: /home
	I0731 20:26:13.217342 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Skipping /home - not owner
	I0731 20:26:13.218273 1111910 main.go:141] libmachine: (ha-430887-m02) define libvirt domain using xml: 
	I0731 20:26:13.218300 1111910 main.go:141] libmachine: (ha-430887-m02) <domain type='kvm'>
	I0731 20:26:13.218310 1111910 main.go:141] libmachine: (ha-430887-m02)   <name>ha-430887-m02</name>
	I0731 20:26:13.218315 1111910 main.go:141] libmachine: (ha-430887-m02)   <memory unit='MiB'>2200</memory>
	I0731 20:26:13.218321 1111910 main.go:141] libmachine: (ha-430887-m02)   <vcpu>2</vcpu>
	I0731 20:26:13.218326 1111910 main.go:141] libmachine: (ha-430887-m02)   <features>
	I0731 20:26:13.218334 1111910 main.go:141] libmachine: (ha-430887-m02)     <acpi/>
	I0731 20:26:13.218343 1111910 main.go:141] libmachine: (ha-430887-m02)     <apic/>
	I0731 20:26:13.218352 1111910 main.go:141] libmachine: (ha-430887-m02)     <pae/>
	I0731 20:26:13.218362 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218368 1111910 main.go:141] libmachine: (ha-430887-m02)   </features>
	I0731 20:26:13.218373 1111910 main.go:141] libmachine: (ha-430887-m02)   <cpu mode='host-passthrough'>
	I0731 20:26:13.218378 1111910 main.go:141] libmachine: (ha-430887-m02)   
	I0731 20:26:13.218385 1111910 main.go:141] libmachine: (ha-430887-m02)   </cpu>
	I0731 20:26:13.218391 1111910 main.go:141] libmachine: (ha-430887-m02)   <os>
	I0731 20:26:13.218397 1111910 main.go:141] libmachine: (ha-430887-m02)     <type>hvm</type>
	I0731 20:26:13.218437 1111910 main.go:141] libmachine: (ha-430887-m02)     <boot dev='cdrom'/>
	I0731 20:26:13.218465 1111910 main.go:141] libmachine: (ha-430887-m02)     <boot dev='hd'/>
	I0731 20:26:13.218477 1111910 main.go:141] libmachine: (ha-430887-m02)     <bootmenu enable='no'/>
	I0731 20:26:13.218487 1111910 main.go:141] libmachine: (ha-430887-m02)   </os>
	I0731 20:26:13.218495 1111910 main.go:141] libmachine: (ha-430887-m02)   <devices>
	I0731 20:26:13.218506 1111910 main.go:141] libmachine: (ha-430887-m02)     <disk type='file' device='cdrom'>
	I0731 20:26:13.218522 1111910 main.go:141] libmachine: (ha-430887-m02)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/boot2docker.iso'/>
	I0731 20:26:13.218537 1111910 main.go:141] libmachine: (ha-430887-m02)       <target dev='hdc' bus='scsi'/>
	I0731 20:26:13.218549 1111910 main.go:141] libmachine: (ha-430887-m02)       <readonly/>
	I0731 20:26:13.218560 1111910 main.go:141] libmachine: (ha-430887-m02)     </disk>
	I0731 20:26:13.218575 1111910 main.go:141] libmachine: (ha-430887-m02)     <disk type='file' device='disk'>
	I0731 20:26:13.218587 1111910 main.go:141] libmachine: (ha-430887-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:26:13.218603 1111910 main.go:141] libmachine: (ha-430887-m02)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/ha-430887-m02.rawdisk'/>
	I0731 20:26:13.218613 1111910 main.go:141] libmachine: (ha-430887-m02)       <target dev='hda' bus='virtio'/>
	I0731 20:26:13.218625 1111910 main.go:141] libmachine: (ha-430887-m02)     </disk>
	I0731 20:26:13.218636 1111910 main.go:141] libmachine: (ha-430887-m02)     <interface type='network'>
	I0731 20:26:13.218646 1111910 main.go:141] libmachine: (ha-430887-m02)       <source network='mk-ha-430887'/>
	I0731 20:26:13.218657 1111910 main.go:141] libmachine: (ha-430887-m02)       <model type='virtio'/>
	I0731 20:26:13.218669 1111910 main.go:141] libmachine: (ha-430887-m02)     </interface>
	I0731 20:26:13.218677 1111910 main.go:141] libmachine: (ha-430887-m02)     <interface type='network'>
	I0731 20:26:13.218689 1111910 main.go:141] libmachine: (ha-430887-m02)       <source network='default'/>
	I0731 20:26:13.218699 1111910 main.go:141] libmachine: (ha-430887-m02)       <model type='virtio'/>
	I0731 20:26:13.218711 1111910 main.go:141] libmachine: (ha-430887-m02)     </interface>
	I0731 20:26:13.218725 1111910 main.go:141] libmachine: (ha-430887-m02)     <serial type='pty'>
	I0731 20:26:13.218736 1111910 main.go:141] libmachine: (ha-430887-m02)       <target port='0'/>
	I0731 20:26:13.218744 1111910 main.go:141] libmachine: (ha-430887-m02)     </serial>
	I0731 20:26:13.218767 1111910 main.go:141] libmachine: (ha-430887-m02)     <console type='pty'>
	I0731 20:26:13.218779 1111910 main.go:141] libmachine: (ha-430887-m02)       <target type='serial' port='0'/>
	I0731 20:26:13.218788 1111910 main.go:141] libmachine: (ha-430887-m02)     </console>
	I0731 20:26:13.218802 1111910 main.go:141] libmachine: (ha-430887-m02)     <rng model='virtio'>
	I0731 20:26:13.218816 1111910 main.go:141] libmachine: (ha-430887-m02)       <backend model='random'>/dev/random</backend>
	I0731 20:26:13.218826 1111910 main.go:141] libmachine: (ha-430887-m02)     </rng>
	I0731 20:26:13.218834 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218840 1111910 main.go:141] libmachine: (ha-430887-m02)     
	I0731 20:26:13.218849 1111910 main.go:141] libmachine: (ha-430887-m02)   </devices>
	I0731 20:26:13.218858 1111910 main.go:141] libmachine: (ha-430887-m02) </domain>
	I0731 20:26:13.218899 1111910 main.go:141] libmachine: (ha-430887-m02) 
	I0731 20:26:13.225601 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:36:d8:14 in network default
	I0731 20:26:13.226161 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring networks are active...
	I0731 20:26:13.226183 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:13.226973 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring network default is active
	I0731 20:26:13.227321 1111910 main.go:141] libmachine: (ha-430887-m02) Ensuring network mk-ha-430887 is active
	I0731 20:26:13.227759 1111910 main.go:141] libmachine: (ha-430887-m02) Getting domain xml...
	I0731 20:26:13.228437 1111910 main.go:141] libmachine: (ha-430887-m02) Creating domain...
	I0731 20:26:14.505856 1111910 main.go:141] libmachine: (ha-430887-m02) Waiting to get IP...
	I0731 20:26:14.506750 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:14.507126 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:14.507182 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:14.507110 1112279 retry.go:31] will retry after 296.364136ms: waiting for machine to come up
	I0731 20:26:14.804694 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:14.805270 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:14.805305 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:14.805178 1112279 retry.go:31] will retry after 242.235382ms: waiting for machine to come up
	I0731 20:26:15.048741 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.049157 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.049191 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.049099 1112279 retry.go:31] will retry after 344.680901ms: waiting for machine to come up
	I0731 20:26:15.395869 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.396306 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.396334 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.396271 1112279 retry.go:31] will retry after 392.20081ms: waiting for machine to come up
	I0731 20:26:15.789746 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:15.790090 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:15.790141 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:15.790062 1112279 retry.go:31] will retry after 734.361712ms: waiting for machine to come up
	I0731 20:26:16.526332 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:16.526806 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:16.526838 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:16.526741 1112279 retry.go:31] will retry after 852.201503ms: waiting for machine to come up
	I0731 20:26:17.380742 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:17.381140 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:17.381168 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:17.381097 1112279 retry.go:31] will retry after 717.122097ms: waiting for machine to come up
	I0731 20:26:18.100265 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:18.100650 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:18.100680 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:18.100596 1112279 retry.go:31] will retry after 1.021652149s: waiting for machine to come up
	I0731 20:26:19.124644 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:19.125147 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:19.125179 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:19.125088 1112279 retry.go:31] will retry after 1.407259848s: waiting for machine to come up
	I0731 20:26:20.534586 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:20.535070 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:20.535095 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:20.535045 1112279 retry.go:31] will retry after 1.618860446s: waiting for machine to come up
	I0731 20:26:22.155990 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:22.156574 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:22.156601 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:22.156531 1112279 retry.go:31] will retry after 2.562240882s: waiting for machine to come up
	I0731 20:26:24.721742 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:24.722132 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:24.722155 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:24.722089 1112279 retry.go:31] will retry after 2.774660653s: waiting for machine to come up
	I0731 20:26:27.497869 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:27.498288 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:27.498320 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:27.498231 1112279 retry.go:31] will retry after 3.183060561s: waiting for machine to come up
	I0731 20:26:30.685033 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:30.685443 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find current IP address of domain ha-430887-m02 in network mk-ha-430887
	I0731 20:26:30.685470 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | I0731 20:26:30.685403 1112279 retry.go:31] will retry after 4.312733669s: waiting for machine to come up
	I0731 20:26:35.000851 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.001268 1111910 main.go:141] libmachine: (ha-430887-m02) Found IP for machine: 192.168.39.149
	I0731 20:26:35.001293 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has current primary IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.001300 1111910 main.go:141] libmachine: (ha-430887-m02) Reserving static IP address...
	I0731 20:26:35.001563 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find host DHCP lease matching {name: "ha-430887-m02", mac: "52:54:00:4a:64:33", ip: "192.168.39.149"} in network mk-ha-430887
	I0731 20:26:35.076504 1111910 main.go:141] libmachine: (ha-430887-m02) Reserved static IP address: 192.168.39.149
	I0731 20:26:35.076540 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Getting to WaitForSSH function...
	I0731 20:26:35.076574 1111910 main.go:141] libmachine: (ha-430887-m02) Waiting for SSH to be available...
	I0731 20:26:35.079205 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:35.079512 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887
	I0731 20:26:35.079543 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | unable to find defined IP address of network mk-ha-430887 interface with MAC address 52:54:00:4a:64:33
	I0731 20:26:35.079662 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH client type: external
	I0731 20:26:35.079694 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa (-rw-------)
	I0731 20:26:35.079723 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:26:35.079736 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | About to run SSH command:
	I0731 20:26:35.079753 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | exit 0
	I0731 20:26:35.083370 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | SSH cmd err, output: exit status 255: 
	I0731 20:26:35.083391 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 20:26:35.083398 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | command : exit 0
	I0731 20:26:35.083403 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | err     : exit status 255
	I0731 20:26:35.083410 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | output  : 
	I0731 20:26:38.083891 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Getting to WaitForSSH function...
	I0731 20:26:38.086581 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.086953 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.086978 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.087099 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH client type: external
	I0731 20:26:38.087125 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa (-rw-------)
	I0731 20:26:38.087148 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:26:38.087158 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | About to run SSH command:
	I0731 20:26:38.087167 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | exit 0
	I0731 20:26:38.212271 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 20:26:38.212558 1111910 main.go:141] libmachine: (ha-430887-m02) KVM machine creation complete!
	I0731 20:26:38.212908 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:38.213562 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:38.213771 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:38.214043 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:26:38.214075 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:26:38.215533 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:26:38.215548 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:26:38.215554 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:26:38.215560 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.218056 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.218478 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.218521 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.218660 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.218830 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.219013 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.219127 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.219302 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.219523 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.219534 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:26:38.323321 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:26:38.323360 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:26:38.323374 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.326362 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.326782 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.326805 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.326987 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.327332 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.327572 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.327734 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.327890 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.328120 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.328135 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:26:38.432530 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:26:38.432593 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:26:38.432600 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:26:38.432611 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.432916 1111910 buildroot.go:166] provisioning hostname "ha-430887-m02"
	I0731 20:26:38.432945 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.433184 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.435455 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.435833 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.435862 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.435994 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.436194 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.436346 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.436499 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.436640 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.436827 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.436842 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887-m02 && echo "ha-430887-m02" | sudo tee /etc/hostname
	I0731 20:26:38.553520 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887-m02
	
	I0731 20:26:38.553553 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.556489 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.556883 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.556907 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.557139 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.557407 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.557578 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.557748 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.557917 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.558091 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.558117 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:26:38.672592 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:26:38.672628 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:26:38.672654 1111910 buildroot.go:174] setting up certificates
	I0731 20:26:38.672664 1111910 provision.go:84] configureAuth start
	I0731 20:26:38.672674 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetMachineName
	I0731 20:26:38.673070 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:38.676175 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.676534 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.676561 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.676727 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.678998 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.679349 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.679388 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.679517 1111910 provision.go:143] copyHostCerts
	I0731 20:26:38.679572 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:26:38.679609 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:26:38.679617 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:26:38.679686 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:26:38.679765 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:26:38.679784 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:26:38.679791 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:26:38.679817 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:26:38.679878 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:26:38.679899 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:26:38.679905 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:26:38.679929 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:26:38.680027 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887-m02 san=[127.0.0.1 192.168.39.149 ha-430887-m02 localhost minikube]
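	The server certificate above is generated for the SAN list shown in the log (127.0.0.1, 192.168.39.149, ha-430887-m02, localhost, minikube). A minimal sketch of producing a certificate with those SANs using Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair listed above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs match the values logged for ha-430887-m02 above.
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")}
		dns := []string{"ha-430887-m02", "localhost", "minikube"}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-430887-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		// Self-signed for brevity; minikube signs with ca.pem/ca-key.pem instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}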
	I0731 20:26:38.823781 1111910 provision.go:177] copyRemoteCerts
	I0731 20:26:38.823860 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:26:38.823892 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.826428 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.826784 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.826820 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.826993 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.827203 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.827377 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.827502 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:38.909710 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:26:38.909804 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:26:38.932062 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:26:38.932158 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:26:38.953382 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:26:38.953449 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:26:38.974894 1111910 provision.go:87] duration metric: took 302.215899ms to configureAuth
	I0731 20:26:38.974923 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:26:38.975151 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:38.975241 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:38.977685 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.977954 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:38.977983 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:38.978168 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:38.978377 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.978532 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:38.978655 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:38.978803 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:38.978962 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:38.978975 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:26:39.229077 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
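	The command above drops a sysconfig file with CRIO_MINIKUBE_OPTIONS and restarts CRI-O. A rough sketch of assembling that command string in Go (quoting simplified relative to the logged command; the service CIDR value is taken from this run):

	package main

	import "fmt"

	// crioOptsCmd mirrors the logged step: write a sysconfig drop-in carrying the
	// insecure-registry flag for the service CIDR, then restart CRI-O.
	func crioOptsCmd(serviceCIDR string) string {
		opts := fmt.Sprintf("--insecure-registry %s ", serviceCIDR)
		return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	}

	func main() {
		fmt.Println(crioOptsCmd("10.96.0.0/12"))
	}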
	I0731 20:26:39.229109 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:26:39.229119 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetURL
	I0731 20:26:39.230419 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | Using libvirt version 6000000
	I0731 20:26:39.233095 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.233462 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.233489 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.233653 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:26:39.233666 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:26:39.233673 1111910 client.go:171] duration metric: took 26.580611093s to LocalClient.Create
	I0731 20:26:39.233696 1111910 start.go:167] duration metric: took 26.580674342s to libmachine.API.Create "ha-430887"
	I0731 20:26:39.233707 1111910 start.go:293] postStartSetup for "ha-430887-m02" (driver="kvm2")
	I0731 20:26:39.233724 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:26:39.233750 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.234011 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:26:39.234045 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.236209 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.236586 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.236615 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.236732 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.236933 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.237099 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.237244 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.318731 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:26:39.322681 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:26:39.322711 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:26:39.322782 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:26:39.322854 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:26:39.322865 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:26:39.322950 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:26:39.331783 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:26:39.353724 1111910 start.go:296] duration metric: took 119.998484ms for postStartSetup
	I0731 20:26:39.353783 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetConfigRaw
	I0731 20:26:39.354389 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:39.357184 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.357566 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.357598 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.357831 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:26:39.358050 1111910 start.go:128] duration metric: took 26.7242363s to createHost
	I0731 20:26:39.358079 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.360308 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.360693 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.360717 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.360871 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.361052 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.361207 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.361416 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.361609 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:26:39.361792 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0731 20:26:39.361802 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:26:39.464485 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457599.444996017
	
	I0731 20:26:39.464522 1111910 fix.go:216] guest clock: 1722457599.444996017
	I0731 20:26:39.464532 1111910 fix.go:229] Guest: 2024-07-31 20:26:39.444996017 +0000 UTC Remote: 2024-07-31 20:26:39.358065032 +0000 UTC m=+80.482218756 (delta=86.930985ms)
	I0731 20:26:39.464556 1111910 fix.go:200] guest clock delta is within tolerance: 86.930985ms
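	The guest-clock check above compares the VM clock against the host timestamp and accepts the roughly 87ms skew. A small illustrative sketch of that tolerance check; the 1s tolerance is an assumed value, not necessarily minikube's exact threshold:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest/host clock skew is acceptable.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(87 * time.Millisecond) // roughly the delta seen above
		d, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}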
	I0731 20:26:39.464564 1111910 start.go:83] releasing machines lock for "ha-430887-m02", held for 26.830842141s
	I0731 20:26:39.464589 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.464910 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:39.467580 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.467956 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.467999 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.470219 1111910 out.go:177] * Found network options:
	I0731 20:26:39.471591 1111910 out.go:177]   - NO_PROXY=192.168.39.195
	W0731 20:26:39.472689 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:26:39.472726 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473291 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473520 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:26:39.473637 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:26:39.473717 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	W0731 20:26:39.473751 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:26:39.473831 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:26:39.473853 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:26:39.476177 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476531 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.476559 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476618 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.476715 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.476891 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.477027 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:39.477052 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:39.477081 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.477194 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:26:39.477276 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.477329 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:26:39.477469 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:26:39.477607 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:26:39.711016 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:26:39.716793 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:26:39.716870 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:26:39.734638 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:26:39.734663 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:26:39.734743 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:26:39.753165 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:26:39.767886 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:26:39.767973 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:26:39.782152 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:26:39.796003 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:26:39.913306 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:26:40.048365 1111910 docker.go:233] disabling docker service ...
	I0731 20:26:40.048455 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:26:40.061809 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:26:40.073628 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:26:40.207469 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:26:40.337871 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:26:40.351142 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:26:40.368003 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:26:40.368081 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.377567 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:26:40.377645 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.387453 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.396923 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.406054 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:26:40.415550 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.424726 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.440278 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:26:40.449650 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:26:40.457983 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:26:40.458061 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:26:40.469947 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
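	The sequence above falls back to modprobe when the bridge-nf-call-iptables sysctl is absent, then enables IPv4 forwarding. An illustrative Go sketch of the same fallback (requires root; not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback in the log: if the
	// bridge-nf-call-iptables sysctl is missing, load br_netfilter, then make
	// sure IPv4 forwarding is enabled.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// sysctl key absent: the br_netfilter module is not loaded yet.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}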
	I0731 20:26:40.482291 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:26:40.601248 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:26:40.729252 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:26:40.729330 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:26:40.733471 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:26:40.733506 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:26:40.736938 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:26:40.771732 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:26:40.771841 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:26:40.797903 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:26:40.826575 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:26:40.828330 1111910 out.go:177]   - env NO_PROXY=192.168.39.195
	I0731 20:26:40.829666 1111910 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:26:40.832404 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:40.832797 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:26:26 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:26:40.832823 1111910 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:26:40.833032 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:26:40.836968 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:26:40.848813 1111910 mustload.go:65] Loading cluster: ha-430887
	I0731 20:26:40.849068 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:26:40.849432 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:40.849468 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:40.864534 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I0731 20:26:40.865033 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:40.865516 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:40.865540 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:40.865856 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:40.866043 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:26:40.867540 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:40.867968 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:40.868004 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:40.882742 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0731 20:26:40.883135 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:40.883584 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:40.883616 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:40.883964 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:40.884236 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:40.884461 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.149
	I0731 20:26:40.884474 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:26:40.884501 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.884690 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:26:40.884748 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:26:40.884767 1111910 certs.go:256] generating profile certs ...
	I0731 20:26:40.884870 1111910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:26:40.884903 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490
	I0731 20:26:40.884923 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.254]
	I0731 20:26:40.985889 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 ...
	I0731 20:26:40.985922 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490: {Name:mkb205178a896117b37b860bb0c1e6c1f7ceb4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.986141 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490 ...
	I0731 20:26:40.986162 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490: {Name:mk00df486dd33be11c2b466cc37cc360d6e75de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:26:40.986265 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.abdbd490 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:26:40.986423 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.abdbd490 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:26:40.986611 1111910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:26:40.986633 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:26:40.986654 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:26:40.986674 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:26:40.986692 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:26:40.986709 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:26:40.986724 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:26:40.986740 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:26:40.986758 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:26:40.986821 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:26:40.986861 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:26:40.986875 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:26:40.986914 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:26:40.986944 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:26:40.986973 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:26:40.987029 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:26:40.987066 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:26:40.987086 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:40.987105 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:26:40.987149 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:40.990347 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:40.990727 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:40.990752 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:40.990967 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:40.991185 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:40.991358 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:40.991517 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:41.064454 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 20:26:41.069124 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 20:26:41.079528 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 20:26:41.083559 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 20:26:41.097602 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 20:26:41.101658 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 20:26:41.111855 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 20:26:41.115804 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 20:26:41.126033 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 20:26:41.129835 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 20:26:41.139268 1111910 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 20:26:41.143253 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 20:26:41.153352 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:26:41.176218 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:26:41.197757 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:26:41.221309 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:26:41.244983 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 20:26:41.268650 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:26:41.290463 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:26:41.311653 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:26:41.333376 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:26:41.354238 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:26:41.375812 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:26:41.396801 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 20:26:41.411292 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 20:26:41.426012 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 20:26:41.440621 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 20:26:41.455272 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 20:26:41.470060 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 20:26:41.484588 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 20:26:41.499352 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:26:41.504576 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:26:41.515341 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.520289 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.520355 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:26:41.525767 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:26:41.535441 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:26:41.544920 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.548814 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.548880 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:26:41.554115 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:26:41.563602 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:26:41.572960 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.576827 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.576879 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:26:41.581952 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
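	The openssl/ln -fs steps above create the subject-hash symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL uses to look up CA certificates. A small sketch of the same idea in Go, shelling out to openssl for the hash (paths are the ones from this run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash creates the /etc/ssl/certs/<subject-hash>.0 symlink for a CA
	// certificate, mirroring the ln -fs commands logged above.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}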
	I0731 20:26:41.591530 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:26:41.595068 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:26:41.595134 1111910 kubeadm.go:934] updating node {m02 192.168.39.149 8443 v1.30.3 crio true true} ...
	I0731 20:26:41.595226 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:26:41.595251 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:26:41.595283 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:26:41.609866 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:26:41.609946 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 20:26:41.610015 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:26:41.618696 1111910 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 20:26:41.618764 1111910 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 20:26:41.627217 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 20:26:41.627246 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:26:41.627326 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:26:41.627336 1111910 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 20:26:41.627366 1111910 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 20:26:41.631120 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 20:26:41.631149 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 20:26:43.082090 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:26:43.096943 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:26:43.097061 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:26:43.101147 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 20:26:43.101192 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 20:26:49.210010 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:26:49.210111 1111910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:26:49.214827 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 20:26:49.214866 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
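	The kubectl/kubelet/kubeadm transfers above follow a stat-then-copy pattern: the binary is only pushed when the target path is missing. A simplified local-filesystem sketch of that pattern (the real code copies over SSH via scp):

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// ensureBinary copies a cached binary into place only when the target is
	// missing, mirroring the existence check seen in the log.
	func ensureBinary(cached, target string) error {
		if _, err := os.Stat(target); err == nil {
			return nil // already present, nothing to transfer
		}
		if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
			return err
		}
		src, err := os.Open(cached)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			return err
		}
		defer dst.Close()
		_, err = io.Copy(dst, src)
		return err
	}

	func main() {
		err := ensureBinary(
			os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.30.3/kubelet"),
			"/var/lib/minikube/binaries/v1.30.3/kubelet",
		)
		fmt.Println(err)
	}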
	I0731 20:26:49.416229 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 20:26:49.425127 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 20:26:49.440316 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:26:49.455305 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:26:49.470292 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:26:49.473905 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:26:49.485088 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:26:49.598242 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:26:49.613566 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:26:49.613993 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:26:49.614038 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:26:49.629539 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0731 20:26:49.630002 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:26:49.630492 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:26:49.630521 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:26:49.630885 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:26:49.631064 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:26:49.631225 1111910 start.go:317] joinCluster: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:26:49.631361 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 20:26:49.631391 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:26:49.634093 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:49.634470 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:26:49.634499 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:26:49.634601 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:26:49.634773 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:26:49.634910 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:26:49.635043 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:26:49.772864 1111910 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:26:49.772931 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xqbiox.ds69zqx06io5ro58 --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m02 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I0731 20:27:09.394637 1111910 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xqbiox.ds69zqx06io5ro58 --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m02 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (19.62167686s)
	I0731 20:27:09.394685 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 20:27:09.949824 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887-m02 minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=false
	I0731 20:27:10.064162 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-430887-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 20:27:10.191263 1111910 start.go:319] duration metric: took 20.56003215s to joinCluster
	I0731 20:27:10.191365 1111910 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:27:10.191644 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:10.192629 1111910 out.go:177] * Verifying Kubernetes components...
	I0731 20:27:10.193900 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:27:10.434173 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:27:10.462180 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:27:10.462499 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 20:27:10.462567 1111910 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0731 20:27:10.462890 1111910 node_ready.go:35] waiting up to 6m0s for node "ha-430887-m02" to be "Ready" ...
	I0731 20:27:10.462994 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:10.463003 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:10.463011 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:10.463014 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:10.487053 1111910 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0731 20:27:10.963184 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:10.963209 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:10.963217 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:10.963221 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:10.969955 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:27:11.464103 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:11.464129 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:11.464139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:11.464143 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:11.514731 1111910 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0731 20:27:11.963905 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:11.963935 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:11.963948 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:11.963955 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:11.967079 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:12.464179 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:12.464208 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:12.464219 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:12.464227 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:12.467975 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:12.468614 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:12.963304 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:12.963330 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:12.963339 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:12.963347 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:12.966502 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:13.463181 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:13.463204 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:13.463212 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:13.463216 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:13.466931 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:13.964077 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:13.964117 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:13.964130 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:13.964135 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:13.967076 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:14.464107 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:14.464141 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:14.464152 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:14.464163 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:14.469219 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:27:14.469765 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:14.963674 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:14.963700 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:14.963713 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:14.963721 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:14.966415 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:15.463216 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:15.463244 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:15.463252 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:15.463257 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:15.466461 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:15.963816 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:15.963844 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:15.963855 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:15.963862 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:15.967018 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:16.464164 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:16.464188 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:16.464198 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:16.464203 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:16.473514 1111910 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 20:27:16.474125 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:16.963440 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:16.963465 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:16.963474 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:16.963477 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:16.966989 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:17.463138 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:17.463164 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:17.463172 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:17.463176 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:17.466092 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:17.963578 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:17.963612 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:17.963625 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:17.963631 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:17.966690 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:18.463154 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:18.463181 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:18.463192 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:18.463197 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:18.468351 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:27:18.963824 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:18.963848 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:18.963856 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:18.963860 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:18.966928 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:18.967729 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:19.463361 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:19.463386 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:19.463395 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:19.463402 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:19.466319 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:19.963623 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:19.963650 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:19.963662 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:19.963674 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:19.966633 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:20.463507 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:20.463531 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:20.463540 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:20.463545 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:20.468084 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:20.964115 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:20.964139 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:20.964147 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:20.964152 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:20.967140 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:21.463895 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:21.463919 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:21.463927 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:21.463931 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:21.466884 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:21.467449 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:21.963986 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:21.964013 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:21.964025 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:21.964033 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:21.967007 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:22.463982 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:22.464008 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:22.464019 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:22.464026 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:22.473217 1111910 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 20:27:22.963730 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:22.963754 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:22.963762 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:22.963768 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:22.968700 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:23.463464 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:23.463494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:23.463507 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:23.463512 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:23.466573 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:23.963354 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:23.963376 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:23.963386 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:23.963391 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:23.966169 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:23.966925 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:24.463459 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:24.463492 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:24.463503 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:24.463525 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:24.468003 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:24.963296 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:24.963326 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:24.963338 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:24.963343 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:24.966267 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:25.463403 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:25.463428 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:25.463436 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:25.463440 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:25.466336 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:25.963317 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:25.963339 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:25.963348 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:25.963353 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:25.966590 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:25.967161 1111910 node_ready.go:53] node "ha-430887-m02" has status "Ready":"False"
	I0731 20:27:26.464157 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:26.464186 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:26.464199 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:26.464206 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:26.468947 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:26.963510 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:26.963534 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:26.963541 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:26.963545 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:26.966817 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:27.464048 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:27.464073 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.464082 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.464085 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.466712 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.963091 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:27.963114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.963123 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.963127 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.966751 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:27.967374 1111910 node_ready.go:49] node "ha-430887-m02" has status "Ready":"True"
	I0731 20:27:27.967395 1111910 node_ready.go:38] duration metric: took 17.504481571s for node "ha-430887-m02" to be "Ready" ...
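The wait above is a plain poll: the node object is fetched roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of that same pattern follows; the kubeconfig path, interval, and names are illustrative assumptions, not minikube's actual code path.

// Sketch only: poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-430887-m02", 6*time.Minute))
}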
	I0731 20:27:27.967406 1111910 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:27:27.967476 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:27.967487 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.967497 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.967504 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.971987 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:27.978919 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.979026 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rhlnq
	I0731 20:27:27.979036 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.979045 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.979053 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.981720 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.982478 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.982494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.982501 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.982508 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.985374 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.986090 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.986109 1111910 pod_ready.go:81] duration metric: took 7.166365ms for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.986117 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.986166 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tkm49
	I0731 20:27:27.986174 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.986181 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.986185 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.988450 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.989279 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.989295 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.989306 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.989311 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.991607 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.992035 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.992050 1111910 pod_ready.go:81] duration metric: took 5.927492ms for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.992057 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.992124 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887
	I0731 20:27:27.992133 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.992139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.992143 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.994648 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.995292 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:27.995308 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.995315 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.995319 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:27.997349 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:27.997899 1111910 pod_ready.go:92] pod "etcd-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:27.997916 1111910 pod_ready.go:81] duration metric: took 5.852465ms for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.997926 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:27.997969 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m02
	I0731 20:27:27.997976 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:27.997983 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:27.997987 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.000162 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:28.000811 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.000827 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.000834 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.000838 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.002759 1111910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 20:27:28.003316 1111910 pod_ready.go:92] pod "etcd-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.003337 1111910 pod_ready.go:81] duration metric: took 5.404252ms for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.003354 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.163795 1111910 request.go:629] Waited for 160.355999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:27:28.163873 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:27:28.163882 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.163908 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.163919 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.167277 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.363281 1111910 request.go:629] Waited for 195.296847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:28.363384 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:28.363393 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.363401 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.363407 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.366585 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.367170 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.367195 1111910 pod_ready.go:81] duration metric: took 363.830066ms for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
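The repeated "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to its defaults (roughly 5 QPS with a burst of 10), and back-to-back GETs queue for a couple hundred milliseconds. A hedged sketch of raising those limits on a rest.Config, reusing the imports from the earlier sketch; the values are illustrative, not what minikube configures.

// Sketch only: a clientset with higher client-side limits so bursts of
// sequential GETs are not queued by the default limiter.
func newFastClient(kubeconfigPath string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go falls back to roughly 5 when this is left at 0
	cfg.Burst = 100 // and roughly 10 for Burst
	return kubernetes.NewForConfig(cfg)
}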
	I0731 20:27:28.367205 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.563355 1111910 request.go:629] Waited for 196.072187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:27:28.563468 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:27:28.563479 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.563490 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.563501 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.566800 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.764150 1111910 request.go:629] Waited for 196.3672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.764213 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:28.764218 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.764225 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.764230 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.767319 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:28.767791 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:28.767818 1111910 pod_ready.go:81] duration metric: took 400.603794ms for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.767841 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:28.963410 1111910 request.go:629] Waited for 195.465822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:27:28.963475 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:27:28.963483 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:28.963494 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:28.963503 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:28.966246 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:29.163142 1111910 request.go:629] Waited for 196.318172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:29.163226 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:29.163231 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.163239 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.163243 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.166099 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:29.166628 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.166646 1111910 pod_ready.go:81] duration metric: took 398.795222ms for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.166656 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.363744 1111910 request.go:629] Waited for 196.991252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:27:29.363809 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:27:29.363815 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.363827 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.363832 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.366932 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.563917 1111910 request.go:629] Waited for 196.383476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.564004 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.564011 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.564020 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.564023 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.567070 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.567552 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.567571 1111910 pod_ready.go:81] duration metric: took 400.909526ms for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.567583 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.764146 1111910 request.go:629] Waited for 196.452227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:27:29.764225 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:27:29.764236 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.764248 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.764254 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.767329 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.963262 1111910 request.go:629] Waited for 195.292706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.963346 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:29.963352 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:29.963360 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:29.963367 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:29.966479 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:29.966999 1111910 pod_ready.go:92] pod "kube-proxy-hsd92" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:29.967026 1111910 pod_ready.go:81] duration metric: took 399.435841ms for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:29.967039 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.163999 1111910 request.go:629] Waited for 196.881062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:27:30.164104 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:27:30.164114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.164122 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.164126 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.167165 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:30.363179 1111910 request.go:629] Waited for 195.295874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.363263 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.363269 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.363279 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.363286 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.366080 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.366646 1111910 pod_ready.go:92] pod "kube-proxy-m49fz" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:30.366670 1111910 pod_ready.go:81] duration metric: took 399.622051ms for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.366679 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.563714 1111910 request.go:629] Waited for 196.9429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:27:30.563785 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:27:30.563791 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.563799 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.563805 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.566691 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.763977 1111910 request.go:629] Waited for 196.357655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.764072 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:27:30.764083 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.764107 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.764113 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.767123 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:30.767647 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:30.767668 1111910 pod_ready.go:81] duration metric: took 400.981891ms for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.767682 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:30.963908 1111910 request.go:629] Waited for 196.144056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:27:30.963989 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:27:30.963994 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:30.964002 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:30.964006 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:30.966914 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:27:31.164000 1111910 request.go:629] Waited for 196.377786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:31.164076 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:27:31.164084 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.164105 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.164113 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.167404 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.168063 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:27:31.168103 1111910 pod_ready.go:81] duration metric: took 400.396907ms for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:27:31.168120 1111910 pod_ready.go:38] duration metric: took 3.200700036s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:27:31.168147 1111910 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:27:31.168221 1111910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:27:31.182225 1111910 api_server.go:72] duration metric: took 20.990809123s to wait for apiserver process to appear ...
	I0731 20:27:31.182252 1111910 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:27:31.182279 1111910 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0731 20:27:31.187695 1111910 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0731 20:27:31.187786 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0731 20:27:31.187799 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.187808 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.187817 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.188697 1111910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 20:27:31.188964 1111910 api_server.go:141] control plane version: v1.30.3
	I0731 20:27:31.188990 1111910 api_server.go:131] duration metric: took 6.730148ms to wait for apiserver health ...
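The health check above is an HTTPS GET of /healthz that expects the literal body "ok", followed by a GET of /version to read the control-plane version. One way to express the same probe through a clientset's REST client, again only a sketch on top of the earlier imports:

// Sketch only: probe the apiserver's /healthz endpoint via the discovery
// client's REST interface and expect the body "ok".
func apiserverHealthy(cs kubernetes.Interface) (bool, error) {
	body, err := cs.Discovery().RESTClient().
		Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return false, err
	}
	return string(body) == "ok", nil
}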
	I0731 20:27:31.189001 1111910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:27:31.363428 1111910 request.go:629] Waited for 174.34329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.363512 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.363520 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.363530 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.363534 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.368392 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:31.372457 1111910 system_pods.go:59] 17 kube-system pods found
	I0731 20:27:31.372482 1111910 system_pods.go:61] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:27:31.372487 1111910 system_pods.go:61] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:27:31.372491 1111910 system_pods.go:61] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:27:31.372496 1111910 system_pods.go:61] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:27:31.372499 1111910 system_pods.go:61] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:27:31.372502 1111910 system_pods.go:61] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:27:31.372505 1111910 system_pods.go:61] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:27:31.372508 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:27:31.372511 1111910 system_pods.go:61] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:27:31.372514 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:27:31.372517 1111910 system_pods.go:61] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:27:31.372520 1111910 system_pods.go:61] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:27:31.372526 1111910 system_pods.go:61] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:27:31.372532 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:27:31.372535 1111910 system_pods.go:61] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:27:31.372537 1111910 system_pods.go:61] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:27:31.372543 1111910 system_pods.go:61] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:27:31.372550 1111910 system_pods.go:74] duration metric: took 183.538397ms to wait for pod list to return data ...
	I0731 20:27:31.372560 1111910 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:27:31.563997 1111910 request.go:629] Waited for 191.354002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:27:31.564105 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:27:31.564115 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.564124 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.564132 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.567231 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.567521 1111910 default_sa.go:45] found service account: "default"
	I0731 20:27:31.567548 1111910 default_sa.go:55] duration metric: took 194.97748ms for default service account to be created ...
	I0731 20:27:31.567559 1111910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:27:31.764058 1111910 request.go:629] Waited for 196.398195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.764136 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:27:31.764142 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.764150 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.764156 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.768830 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:27:31.772955 1111910 system_pods.go:86] 17 kube-system pods found
	I0731 20:27:31.772983 1111910 system_pods.go:89] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:27:31.772990 1111910 system_pods.go:89] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:27:31.772997 1111910 system_pods.go:89] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:27:31.773003 1111910 system_pods.go:89] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:27:31.773009 1111910 system_pods.go:89] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:27:31.773014 1111910 system_pods.go:89] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:27:31.773020 1111910 system_pods.go:89] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:27:31.773026 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:27:31.773032 1111910 system_pods.go:89] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:27:31.773040 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:27:31.773050 1111910 system_pods.go:89] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:27:31.773060 1111910 system_pods.go:89] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:27:31.773068 1111910 system_pods.go:89] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:27:31.773076 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:27:31.773085 1111910 system_pods.go:89] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:27:31.773090 1111910 system_pods.go:89] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:27:31.773097 1111910 system_pods.go:89] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:27:31.773110 1111910 system_pods.go:126] duration metric: took 205.539527ms to wait for k8s-apps to be running ...
	I0731 20:27:31.773123 1111910 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:27:31.773181 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:27:31.786395 1111910 system_svc.go:56] duration metric: took 13.263755ms WaitForService to wait for kubelet
	I0731 20:27:31.786425 1111910 kubeadm.go:582] duration metric: took 21.595015678s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:27:31.786447 1111910 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:27:31.963812 1111910 request.go:629] Waited for 177.278545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0731 20:27:31.963877 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0731 20:27:31.963882 1111910 round_trippers.go:469] Request Headers:
	I0731 20:27:31.963891 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:27:31.963895 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:27:31.967186 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:27:31.968033 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:27:31.968059 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:27:31.968082 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:27:31.968102 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:27:31.968110 1111910 node_conditions.go:105] duration metric: took 181.656598ms to run NodePressure ...
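The NodePressure step reads each node's reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node) and inspects the node's pressure conditions. A sketch of the same inspection with client-go, assuming the earlier imports; the exact conditions checked here are an assumption.

// Sketch only: print each node's CPU and ephemeral-storage capacity and fail
// if a memory- or disk-pressure condition is True.
func checkNodePressure(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}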
	I0731 20:27:31.968127 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:27:31.968169 1111910 start.go:255] writing updated cluster config ...
	I0731 20:27:31.970116 1111910 out.go:177] 
	I0731 20:27:31.971456 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:31.971557 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:31.972967 1111910 out.go:177] * Starting "ha-430887-m03" control-plane node in "ha-430887" cluster
	I0731 20:27:31.974051 1111910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:27:31.974073 1111910 cache.go:56] Caching tarball of preloaded images
	I0731 20:27:31.974199 1111910 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:27:31.974212 1111910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:27:31.974324 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:31.974524 1111910 start.go:360] acquireMachinesLock for ha-430887-m03: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:27:31.974580 1111910 start.go:364] duration metric: took 28.878µs to acquireMachinesLock for "ha-430887-m03"
	I0731 20:27:31.974604 1111910 start.go:93] Provisioning new machine with config: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:27:31.974749 1111910 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 20:27:31.976083 1111910 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:27:31.976194 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:31.976230 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:31.991630 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0731 20:27:31.992116 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:31.992670 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:31.992696 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:31.993083 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:31.993299 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:31.993461 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:31.993660 1111910 start.go:159] libmachine.API.Create for "ha-430887" (driver="kvm2")
	I0731 20:27:31.993691 1111910 client.go:168] LocalClient.Create starting
	I0731 20:27:31.993725 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 20:27:31.993766 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:27:31.993785 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:27:31.993868 1111910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 20:27:31.993896 1111910 main.go:141] libmachine: Decoding PEM data...
	I0731 20:27:31.993913 1111910 main.go:141] libmachine: Parsing certificate...
	I0731 20:27:31.993938 1111910 main.go:141] libmachine: Running pre-create checks...
	I0731 20:27:31.993948 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .PreCreateCheck
	I0731 20:27:31.994185 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:31.994626 1111910 main.go:141] libmachine: Creating machine...
	I0731 20:27:31.994641 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .Create
	I0731 20:27:31.994763 1111910 main.go:141] libmachine: (ha-430887-m03) Creating KVM machine...
	I0731 20:27:31.996124 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found existing default KVM network
	I0731 20:27:31.996264 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found existing private KVM network mk-ha-430887
	I0731 20:27:31.996463 1111910 main.go:141] libmachine: (ha-430887-m03) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 ...
	I0731 20:27:31.996487 1111910 main.go:141] libmachine: (ha-430887-m03) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:27:31.996557 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:31.996455 1112687 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:27:31.996708 1111910 main.go:141] libmachine: (ha-430887-m03) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 20:27:32.286637 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.286507 1112687 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa...
	I0731 20:27:32.597988 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.597833 1112687 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/ha-430887-m03.rawdisk...
	I0731 20:27:32.598038 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Writing magic tar header
	I0731 20:27:32.598053 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Writing SSH key tar header
	I0731 20:27:32.598069 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:32.597994 1112687 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 ...
	I0731 20:27:32.598172 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03
	I0731 20:27:32.598197 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03 (perms=drwx------)
	I0731 20:27:32.598204 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 20:27:32.598220 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:27:32.598233 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 20:27:32.598245 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:27:32.598256 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 20:27:32.598265 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 20:27:32.598271 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:27:32.598280 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:27:32.598287 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Checking permissions on dir: /home
	I0731 20:27:32.598302 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Skipping /home - not owner
	I0731 20:27:32.598314 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:27:32.598331 1111910 main.go:141] libmachine: (ha-430887-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:27:32.598344 1111910 main.go:141] libmachine: (ha-430887-m03) Creating domain...
	I0731 20:27:32.599319 1111910 main.go:141] libmachine: (ha-430887-m03) define libvirt domain using xml: 
	I0731 20:27:32.599342 1111910 main.go:141] libmachine: (ha-430887-m03) <domain type='kvm'>
	I0731 20:27:32.599353 1111910 main.go:141] libmachine: (ha-430887-m03)   <name>ha-430887-m03</name>
	I0731 20:27:32.599371 1111910 main.go:141] libmachine: (ha-430887-m03)   <memory unit='MiB'>2200</memory>
	I0731 20:27:32.599384 1111910 main.go:141] libmachine: (ha-430887-m03)   <vcpu>2</vcpu>
	I0731 20:27:32.599395 1111910 main.go:141] libmachine: (ha-430887-m03)   <features>
	I0731 20:27:32.599407 1111910 main.go:141] libmachine: (ha-430887-m03)     <acpi/>
	I0731 20:27:32.599416 1111910 main.go:141] libmachine: (ha-430887-m03)     <apic/>
	I0731 20:27:32.599427 1111910 main.go:141] libmachine: (ha-430887-m03)     <pae/>
	I0731 20:27:32.599438 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.599478 1111910 main.go:141] libmachine: (ha-430887-m03)   </features>
	I0731 20:27:32.599503 1111910 main.go:141] libmachine: (ha-430887-m03)   <cpu mode='host-passthrough'>
	I0731 20:27:32.599516 1111910 main.go:141] libmachine: (ha-430887-m03)   
	I0731 20:27:32.599526 1111910 main.go:141] libmachine: (ha-430887-m03)   </cpu>
	I0731 20:27:32.599535 1111910 main.go:141] libmachine: (ha-430887-m03)   <os>
	I0731 20:27:32.599546 1111910 main.go:141] libmachine: (ha-430887-m03)     <type>hvm</type>
	I0731 20:27:32.599558 1111910 main.go:141] libmachine: (ha-430887-m03)     <boot dev='cdrom'/>
	I0731 20:27:32.599581 1111910 main.go:141] libmachine: (ha-430887-m03)     <boot dev='hd'/>
	I0731 20:27:32.599606 1111910 main.go:141] libmachine: (ha-430887-m03)     <bootmenu enable='no'/>
	I0731 20:27:32.599618 1111910 main.go:141] libmachine: (ha-430887-m03)   </os>
	I0731 20:27:32.599626 1111910 main.go:141] libmachine: (ha-430887-m03)   <devices>
	I0731 20:27:32.599640 1111910 main.go:141] libmachine: (ha-430887-m03)     <disk type='file' device='cdrom'>
	I0731 20:27:32.599654 1111910 main.go:141] libmachine: (ha-430887-m03)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/boot2docker.iso'/>
	I0731 20:27:32.599664 1111910 main.go:141] libmachine: (ha-430887-m03)       <target dev='hdc' bus='scsi'/>
	I0731 20:27:32.599669 1111910 main.go:141] libmachine: (ha-430887-m03)       <readonly/>
	I0731 20:27:32.599699 1111910 main.go:141] libmachine: (ha-430887-m03)     </disk>
	I0731 20:27:32.599719 1111910 main.go:141] libmachine: (ha-430887-m03)     <disk type='file' device='disk'>
	I0731 20:27:32.599736 1111910 main.go:141] libmachine: (ha-430887-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:27:32.599751 1111910 main.go:141] libmachine: (ha-430887-m03)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/ha-430887-m03.rawdisk'/>
	I0731 20:27:32.599765 1111910 main.go:141] libmachine: (ha-430887-m03)       <target dev='hda' bus='virtio'/>
	I0731 20:27:32.599774 1111910 main.go:141] libmachine: (ha-430887-m03)     </disk>
	I0731 20:27:32.599784 1111910 main.go:141] libmachine: (ha-430887-m03)     <interface type='network'>
	I0731 20:27:32.599800 1111910 main.go:141] libmachine: (ha-430887-m03)       <source network='mk-ha-430887'/>
	I0731 20:27:32.599812 1111910 main.go:141] libmachine: (ha-430887-m03)       <model type='virtio'/>
	I0731 20:27:32.599822 1111910 main.go:141] libmachine: (ha-430887-m03)     </interface>
	I0731 20:27:32.599836 1111910 main.go:141] libmachine: (ha-430887-m03)     <interface type='network'>
	I0731 20:27:32.599848 1111910 main.go:141] libmachine: (ha-430887-m03)       <source network='default'/>
	I0731 20:27:32.599861 1111910 main.go:141] libmachine: (ha-430887-m03)       <model type='virtio'/>
	I0731 20:27:32.599875 1111910 main.go:141] libmachine: (ha-430887-m03)     </interface>
	I0731 20:27:32.599887 1111910 main.go:141] libmachine: (ha-430887-m03)     <serial type='pty'>
	I0731 20:27:32.599897 1111910 main.go:141] libmachine: (ha-430887-m03)       <target port='0'/>
	I0731 20:27:32.599907 1111910 main.go:141] libmachine: (ha-430887-m03)     </serial>
	I0731 20:27:32.599918 1111910 main.go:141] libmachine: (ha-430887-m03)     <console type='pty'>
	I0731 20:27:32.599930 1111910 main.go:141] libmachine: (ha-430887-m03)       <target type='serial' port='0'/>
	I0731 20:27:32.599940 1111910 main.go:141] libmachine: (ha-430887-m03)     </console>
	I0731 20:27:32.599949 1111910 main.go:141] libmachine: (ha-430887-m03)     <rng model='virtio'>
	I0731 20:27:32.599963 1111910 main.go:141] libmachine: (ha-430887-m03)       <backend model='random'>/dev/random</backend>
	I0731 20:27:32.599974 1111910 main.go:141] libmachine: (ha-430887-m03)     </rng>
	I0731 20:27:32.599984 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.599992 1111910 main.go:141] libmachine: (ha-430887-m03)     
	I0731 20:27:32.600004 1111910 main.go:141] libmachine: (ha-430887-m03)   </devices>
	I0731 20:27:32.600014 1111910 main.go:141] libmachine: (ha-430887-m03) </domain>
	I0731 20:27:32.600026 1111910 main.go:141] libmachine: (ha-430887-m03) 
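The XML dump above is the libvirt domain definition the kvm2 driver submits for the new m03 machine: CD-ROM boot from the boot2docker ISO, the raw disk image, two virtio NICs on the mk-ha-430887 and default networks, a serial console, and a virtio RNG. As a rough illustration only, and assuming the libvirt.org/go/libvirt CGO bindings and local libvirt headers are available, the same define-and-start step could be sketched in Go like this; minikube's real driver does this through its docker-machine-driver-kvm2 plugin, not through this code:

// Sketch only: define and start a libvirt domain from an XML file.
// Assumes the XML shown in the log was saved to ha-430887-m03.xml.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-430887-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it, the equivalent of `virsh start`
		panic(err)
	}
	fmt.Println("domain defined and started")
}

From the shell, `virsh define` followed by `virsh start` covers the same two steps.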
	I0731 20:27:32.607824 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:f2:bd:db in network default
	I0731 20:27:32.608459 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring networks are active...
	I0731 20:27:32.608476 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:32.609170 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring network default is active
	I0731 20:27:32.609476 1111910 main.go:141] libmachine: (ha-430887-m03) Ensuring network mk-ha-430887 is active
	I0731 20:27:32.609833 1111910 main.go:141] libmachine: (ha-430887-m03) Getting domain xml...
	I0731 20:27:32.610534 1111910 main.go:141] libmachine: (ha-430887-m03) Creating domain...
	I0731 20:27:33.830734 1111910 main.go:141] libmachine: (ha-430887-m03) Waiting to get IP...
	I0731 20:27:33.831662 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:33.832049 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:33.832079 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:33.832025 1112687 retry.go:31] will retry after 254.049554ms: waiting for machine to come up
	I0731 20:27:34.087542 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.088027 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.088056 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.087980 1112687 retry.go:31] will retry after 271.956827ms: waiting for machine to come up
	I0731 20:27:34.361595 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.362065 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.362097 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.362045 1112687 retry.go:31] will retry after 481.093647ms: waiting for machine to come up
	I0731 20:27:34.844678 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:34.845084 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:34.845107 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:34.845047 1112687 retry.go:31] will retry after 553.436017ms: waiting for machine to come up
	I0731 20:27:35.399824 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:35.400216 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:35.400263 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:35.400174 1112687 retry.go:31] will retry after 573.943855ms: waiting for machine to come up
	I0731 20:27:35.976809 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:35.977282 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:35.977311 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:35.977230 1112687 retry.go:31] will retry after 719.564235ms: waiting for machine to come up
	I0731 20:27:36.698107 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:36.698492 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:36.698517 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:36.698463 1112687 retry.go:31] will retry after 843.432167ms: waiting for machine to come up
	I0731 20:27:37.543764 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:37.544288 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:37.544314 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:37.544236 1112687 retry.go:31] will retry after 1.27103611s: waiting for machine to come up
	I0731 20:27:38.817349 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:38.817839 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:38.817865 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:38.817797 1112687 retry.go:31] will retry after 1.569967185s: waiting for machine to come up
	I0731 20:27:40.389169 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:40.389722 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:40.389749 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:40.389681 1112687 retry.go:31] will retry after 2.27233384s: waiting for machine to come up
	I0731 20:27:42.664409 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:42.664907 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:42.664938 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:42.664850 1112687 retry.go:31] will retry after 2.169072633s: waiting for machine to come up
	I0731 20:27:44.837083 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:44.837448 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:44.837472 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:44.837413 1112687 retry.go:31] will retry after 2.737790564s: waiting for machine to come up
	I0731 20:27:47.577033 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:47.577418 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:47.577445 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:47.577369 1112687 retry.go:31] will retry after 3.226247613s: waiting for machine to come up
	I0731 20:27:50.805074 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:50.805502 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find current IP address of domain ha-430887-m03 in network mk-ha-430887
	I0731 20:27:50.805528 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | I0731 20:27:50.805455 1112687 retry.go:31] will retry after 4.606974131s: waiting for machine to come up
	I0731 20:27:55.416718 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.417104 1111910 main.go:141] libmachine: (ha-430887-m03) Found IP for machine: 192.168.39.44
	I0731 20:27:55.417133 1111910 main.go:141] libmachine: (ha-430887-m03) Reserving static IP address...
	I0731 20:27:55.417146 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.417667 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | unable to find host DHCP lease matching {name: "ha-430887-m03", mac: "52:54:00:52:fa:c0", ip: "192.168.39.44"} in network mk-ha-430887
	I0731 20:27:55.492542 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Getting to WaitForSSH function...
	I0731 20:27:55.492580 1111910 main.go:141] libmachine: (ha-430887-m03) Reserved static IP address: 192.168.39.44
	I0731 20:27:55.492594 1111910 main.go:141] libmachine: (ha-430887-m03) Waiting for SSH to be available...
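The repeated "unable to find current IP address ... will retry after ..." lines above are minikube's retry helper polling libvirt for the new domain's DHCP lease, with a delay that grows (with jitter) from roughly 250ms to several seconds until the lease appears. A self-contained Go sketch of that wait-with-backoff shape, using a hypothetical lookupLeaseIP stand-in for the real lease lookup:

// Illustrative only; the real lookup asks libvirt for the DHCP lease
// matching the domain's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no lease yet")

// lookupLeaseIP is a placeholder that "succeeds" on the fifth attempt.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.44", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupLeaseIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// add jitter and grow the delay, roughly like the log's 254ms, 271ms, 481ms, ...
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}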
	I0731 20:27:55.495071 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.495489 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.495519 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.495687 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using SSH client type: external
	I0731 20:27:55.495719 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa (-rw-------)
	I0731 20:27:55.495755 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:27:55.495773 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | About to run SSH command:
	I0731 20:27:55.495790 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | exit 0
	I0731 20:27:55.615853 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 20:27:55.616240 1111910 main.go:141] libmachine: (ha-430887-m03) KVM machine creation complete!
	I0731 20:27:55.616518 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:55.617069 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:55.617296 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:55.617490 1111910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:27:55.617518 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:27:55.618707 1111910 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:27:55.618724 1111910 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:27:55.618732 1111910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:27:55.618740 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.620837 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.621225 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.621250 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.621421 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.621598 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.621758 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.621880 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.622039 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.622270 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.622281 1111910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:27:55.719264 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:27:55.719291 1111910 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:27:55.719303 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.721845 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.722169 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.722197 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.722350 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.722537 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.722704 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.722868 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.723055 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.723250 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.723262 1111910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:27:55.820540 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:27:55.820634 1111910 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:27:55.820648 1111910 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:27:55.820661 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:55.820919 1111910 buildroot.go:166] provisioning hostname "ha-430887-m03"
	I0731 20:27:55.820944 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:55.821132 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.823922 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.824353 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.824378 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.824570 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.824755 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.824938 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.825095 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.825278 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.825519 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.825539 1111910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887-m03 && echo "ha-430887-m03" | sudo tee /etc/hostname
	I0731 20:27:55.941292 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887-m03
	
	I0731 20:27:55.941322 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:55.944171 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.944532 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:55.944579 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:55.944724 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:55.944953 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.945134 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:55.945268 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:55.945458 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:55.945626 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:55.945642 1111910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:27:56.052176 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:27:56.052206 1111910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:27:56.052231 1111910 buildroot.go:174] setting up certificates
	I0731 20:27:56.052241 1111910 provision.go:84] configureAuth start
	I0731 20:27:56.052252 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetMachineName
	I0731 20:27:56.052539 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.055307 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.055713 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.055742 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.055895 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.058168 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.058509 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.058539 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.058666 1111910 provision.go:143] copyHostCerts
	I0731 20:27:56.058702 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:27:56.058739 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:27:56.058749 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:27:56.058835 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:27:56.058911 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:27:56.058929 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:27:56.058937 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:27:56.058960 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:27:56.059054 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:27:56.059080 1111910 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:27:56.059088 1111910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:27:56.059128 1111910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:27:56.059202 1111910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887-m03 san=[127.0.0.1 192.168.39.44 ha-430887-m03 localhost minikube]
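provision.go:117 above reports generating a server certificate whose SANs cover the loopback address, the node's new IP, its hostname, and the localhost/minikube names, signed by the CA key under .minikube/certs. A compact, stdlib-only Go sketch of issuing a certificate with that SAN shape; it self-signs for brevity, whereas the real server.pem is signed by the minikube CA:

// Minimal sketch, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-430887-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-430887-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.44")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}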
	I0731 20:27:56.128693 1111910 provision.go:177] copyRemoteCerts
	I0731 20:27:56.128774 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:27:56.128811 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.131590 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.132002 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.132027 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.132235 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.132386 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.132497 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.132600 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.210027 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:27:56.210117 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:27:56.234161 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:27:56.234258 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 20:27:56.257807 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:27:56.257898 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:27:56.279968 1111910 provision.go:87] duration metric: took 227.707935ms to configureAuth
	I0731 20:27:56.280007 1111910 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:27:56.280328 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:56.280442 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.283580 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.284020 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.284052 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.284280 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.284547 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.284743 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.284916 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.285191 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:56.285378 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:56.285399 1111910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:27:56.530667 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:27:56.530701 1111910 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:27:56.530712 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetURL
	I0731 20:27:56.532410 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | Using libvirt version 6000000
	I0731 20:27:56.535092 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.535502 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.535536 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.535713 1111910 main.go:141] libmachine: Docker is up and running!
	I0731 20:27:56.535725 1111910 main.go:141] libmachine: Reticulating splines...
	I0731 20:27:56.535731 1111910 client.go:171] duration metric: took 24.542033072s to LocalClient.Create
	I0731 20:27:56.535758 1111910 start.go:167] duration metric: took 24.542097631s to libmachine.API.Create "ha-430887"
	I0731 20:27:56.535771 1111910 start.go:293] postStartSetup for "ha-430887-m03" (driver="kvm2")
	I0731 20:27:56.535785 1111910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:27:56.535810 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.536131 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:27:56.536159 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.538554 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.538957 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.538990 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.539199 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.539379 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.539519 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.539645 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.618288 1111910 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:27:56.622369 1111910 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:27:56.622403 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:27:56.622470 1111910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:27:56.622557 1111910 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:27:56.622575 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:27:56.622696 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:27:56.631574 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:27:56.656215 1111910 start.go:296] duration metric: took 120.426549ms for postStartSetup
	I0731 20:27:56.656287 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetConfigRaw
	I0731 20:27:56.656987 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.659613 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.660171 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.660202 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.660490 1111910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:27:56.660690 1111910 start.go:128] duration metric: took 24.685929924s to createHost
	I0731 20:27:56.660718 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.663033 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.663416 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.663445 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.663595 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.663818 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.664005 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.664154 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.664307 1111910 main.go:141] libmachine: Using SSH client type: native
	I0731 20:27:56.664511 1111910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I0731 20:27:56.664522 1111910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:27:56.764455 1111910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457676.741598635
	
	I0731 20:27:56.764478 1111910 fix.go:216] guest clock: 1722457676.741598635
	I0731 20:27:56.764498 1111910 fix.go:229] Guest: 2024-07-31 20:27:56.741598635 +0000 UTC Remote: 2024-07-31 20:27:56.660703552 +0000 UTC m=+157.784857276 (delta=80.895083ms)
	I0731 20:27:56.764521 1111910 fix.go:200] guest clock delta is within tolerance: 80.895083ms
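fix.go above reads the guest clock over SSH (the date command a few lines earlier; the %!s(MISSING) noise is just Go's printf reporting the unescaped % verbs in the logged command string), compares it against the host clock, and only considers a resync when the delta exceeds a tolerance; here the ~81ms delta passes. A small Go sketch of that check, with the 2s tolerance being an assumed value rather than one taken from the log:

// Sketch only: compare a guest clock reading against the host clock.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fake a guest reading ~80ms ahead of the host; the real value comes
	// from running `date` inside the VM over SSH.
	host := time.Now()
	guest := host.Add(80 * time.Millisecond)

	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds %v, would resync\n", delta, tolerance)
	}
}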
	I0731 20:27:56.764528 1111910 start.go:83] releasing machines lock for "ha-430887-m03", held for 24.789935728s
	I0731 20:27:56.764554 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.764861 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:56.767477 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.767875 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.767906 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.770121 1111910 out.go:177] * Found network options:
	I0731 20:27:56.771478 1111910 out.go:177]   - NO_PROXY=192.168.39.195,192.168.39.149
	W0731 20:27:56.772541 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 20:27:56.772577 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:27:56.772597 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773107 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773299 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:27:56.773408 1111910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:27:56.773445 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	W0731 20:27:56.773537 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 20:27:56.773561 1111910 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 20:27:56.773616 1111910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:27:56.773634 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:27:56.776404 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776473 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776815 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.776838 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776867 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:56.776887 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:56.776981 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.777091 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:27:56.777178 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.777354 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.777368 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:27:56.777542 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:56.777561 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:27:56.777708 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:27:57.006577 1111910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:27:57.012469 1111910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:27:57.012545 1111910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:27:57.028264 1111910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:27:57.028289 1111910 start.go:495] detecting cgroup driver to use...
	I0731 20:27:57.028367 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:27:57.043635 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:27:57.056608 1111910 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:27:57.056683 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:27:57.069906 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:27:57.082502 1111910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:27:57.197561 1111910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:27:57.331946 1111910 docker.go:233] disabling docker service ...
	I0731 20:27:57.332028 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:27:57.346408 1111910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:27:57.358495 1111910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:27:57.500031 1111910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:27:57.620301 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:27:57.633465 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:27:57.650241 1111910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:27:57.650304 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.660892 1111910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:27:57.660999 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.670938 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.681757 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.691993 1111910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:27:57.702682 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.713001 1111910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.729298 1111910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:27:57.740962 1111910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:27:57.749983 1111910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:27:57.750050 1111910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:27:57.761442 1111910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
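
The status-255 sysctl above only means br_netfilter was not loaded yet, so the bridge-nf-call-iptables key did not exist; once modprobe succeeds the key resolves and bridged pod traffic becomes visible to iptables. A quick manual check on the guest (assuming the stock minikube ISO, where the module is available) would look like:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables      # should now resolve, typically "= 1"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
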
	I0731 20:27:57.770383 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:27:57.901256 1111910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:27:58.031043 1111910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:27:58.031132 1111910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:27:58.036217 1111910 start.go:563] Will wait 60s for crictl version
	I0731 20:27:58.036297 1111910 ssh_runner.go:195] Run: which crictl
	I0731 20:27:58.039857 1111910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:27:58.073700 1111910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:27:58.073791 1111910 ssh_runner.go:195] Run: crio --version
	I0731 20:27:58.101707 1111910 ssh_runner.go:195] Run: crio --version
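
These version probes go through the CRI socket that the freshly written /etc/crictl.yaml points at. Since crictl.yaml already pins the endpoint, the explicit flag below is redundant and shown only for clarity; this is a manual equivalent, not part of the test run:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	crio --version
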
	I0731 20:27:58.132748 1111910 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:27:58.133969 1111910 out.go:177]   - env NO_PROXY=192.168.39.195
	I0731 20:27:58.135216 1111910 out.go:177]   - env NO_PROXY=192.168.39.195,192.168.39.149
	I0731 20:27:58.136283 1111910 main.go:141] libmachine: (ha-430887-m03) Calling .GetIP
	I0731 20:27:58.139221 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:58.139646 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:27:58.139674 1111910 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:27:58.139919 1111910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:27:58.143957 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:27:58.155511 1111910 mustload.go:65] Loading cluster: ha-430887
	I0731 20:27:58.155771 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:27:58.156070 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:58.156132 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:58.170988 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0731 20:27:58.171503 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:58.171986 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:58.172008 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:58.172351 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:58.172565 1111910 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:27:58.174227 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:27:58.174543 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:27:58.174589 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:27:58.190061 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I0731 20:27:58.190699 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:27:58.191291 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:27:58.191320 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:27:58.191701 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:27:58.191891 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:27:58.192069 1111910 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.44
	I0731 20:27:58.192083 1111910 certs.go:194] generating shared ca certs ...
	I0731 20:27:58.192120 1111910 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.192284 1111910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:27:58.192341 1111910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:27:58.192357 1111910 certs.go:256] generating profile certs ...
	I0731 20:27:58.192454 1111910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:27:58.192483 1111910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416
	I0731 20:27:58.192504 1111910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.44 192.168.39.254]
	I0731 20:27:58.349602 1111910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 ...
	I0731 20:27:58.349639 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416: {Name:mk04931c2e9aad5b0d132e036e10941af8973c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.349824 1111910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416 ...
	I0731 20:27:58.349839 1111910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416: {Name:mka66a23f9bd02ebe6126d22c4955d8613c8bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:27:58.349908 1111910 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.f307f416 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:27:58.350027 1111910 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.f307f416 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:27:58.350163 1111910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:27:58.350180 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:27:58.350193 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:27:58.350206 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:27:58.350219 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:27:58.350231 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:27:58.350244 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:27:58.350256 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:27:58.350268 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:27:58.350318 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:27:58.350347 1111910 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:27:58.350357 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:27:58.350380 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:27:58.350401 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:27:58.350424 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:27:58.350467 1111910 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:27:58.350493 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.350509 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.350524 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
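
The profile apiserver cert regenerated above carries the service IPs, all three control-plane addresses and the HA VIP (192.168.39.254) as SANs, which is what lets clients reach any of the apiservers through one certificate. One way to eyeball that SAN list from the local cache (path taken from the log; run wherever the profile directory lives):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
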
	I0731 20:27:58.350575 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:27:58.353482 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:27:58.353869 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:27:58.353897 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:27:58.354119 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:27:58.354351 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:27:58.354518 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:27:58.354661 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:27:58.428500 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0731 20:27:58.433376 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 20:27:58.448961 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0731 20:27:58.453144 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 20:27:58.467092 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 20:27:58.473427 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 20:27:58.482991 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0731 20:27:58.487189 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 20:27:58.497232 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0731 20:27:58.501529 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 20:27:58.511687 1111910 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0731 20:27:58.515560 1111910 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 20:27:58.525780 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:27:58.549881 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:27:58.575035 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:27:58.599186 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:27:58.622975 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 20:27:58.645305 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:27:58.668151 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:27:58.690077 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:27:58.712675 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:27:58.735426 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:27:58.758296 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:27:58.780077 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 20:27:58.795057 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 20:27:58.810126 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 20:27:58.824954 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 20:27:58.840504 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 20:27:58.856141 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 20:27:58.871975 1111910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 20:27:58.887214 1111910 ssh_runner.go:195] Run: openssl version
	I0731 20:27:58.892770 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:27:58.902504 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.906561 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.906612 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:27:58.911994 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:27:58.921848 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:27:58.931592 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.935659 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.935724 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:27:58.941067 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:27:58.952263 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:27:58.961749 1111910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.965702 1111910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.965752 1111910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:27:58.971258 1111910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
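
The ln -fs dance above follows OpenSSL's hashed-directory convention: each trusted CA gets a symlink named after its subject hash so verification can find it without scanning the directory. Reproducing one of the hashes by hand on the guest (the b5213941 value matches the link created for minikubeCA.pem above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
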
	I0731 20:27:58.981081 1111910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:27:58.984988 1111910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:27:58.985040 1111910 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.30.3 crio true true} ...
	I0731 20:27:58.985146 1111910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:27:58.985185 1111910 kube-vip.go:115] generating kube-vip config ...
	I0731 20:27:58.985229 1111910 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:27:58.999089 1111910 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:27:58.999171 1111910 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
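
This static-pod manifest is what makes the 192.168.39.254 VIP float between control planes: kube-vip runs leader election on the plndr-cp-lock lease and the current leader answers ARP for the address (and, with lb_enable, spreads apiserver traffic across members). Once the cluster is up, one way to see which node currently holds the VIP (assuming kubectl is pointed at this cluster) is:

	kubectl -n kube-system get lease plndr-cp-lock -o yaml
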
	I0731 20:27:58.999232 1111910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:27:59.008764 1111910 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 20:27:59.008845 1111910 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 20:27:59.018072 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 20:27:59.018093 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 20:27:59.018101 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:27:59.018147 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:27:59.018163 1111910 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 20:27:59.018076 1111910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 20:27:59.018198 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:27:59.018278 1111910 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 20:27:59.032183 1111910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:27:59.032233 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 20:27:59.032262 1111910 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 20:27:59.032264 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 20:27:59.032309 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 20:27:59.032340 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 20:27:59.056582 1111910 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 20:27:59.056618 1111910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
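
Because the remote stat checks failed, kubectl, kubeadm and kubelet are pushed from the local .minikube/cache rather than fetched on the guest; the dl.k8s.io URLs with a sha256 checksum parameter in the lines above are what minikube uses when it does need to download. Verifying such a download by hand uses the same published checksum file:

	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
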
	I0731 20:27:59.900501 1111910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 20:27:59.910098 1111910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 20:27:59.926467 1111910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:27:59.941870 1111910 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:27:59.958754 1111910 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:27:59.962670 1111910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:27:59.975556 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:28:00.099632 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:28:00.127831 1111910 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:28:00.128284 1111910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:28:00.128335 1111910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:28:00.144945 1111910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0731 20:28:00.145386 1111910 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:28:00.145978 1111910 main.go:141] libmachine: Using API Version  1
	I0731 20:28:00.146008 1111910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:28:00.146482 1111910 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:28:00.146721 1111910 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:28:00.146914 1111910 start.go:317] joinCluster: &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:28:00.147064 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 20:28:00.147082 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:28:00.149943 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:28:00.150296 1111910 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:28:00.150323 1111910 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:28:00.150515 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:28:00.150687 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:28:00.150828 1111910 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:28:00.150992 1111910 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:28:00.313550 1111910 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:28:00.313622 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ctfq0t.spxvt2sjrrhnv26x --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I0731 20:28:21.293546 1111910 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ctfq0t.spxvt2sjrrhnv26x --discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-430887-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (20.979881723s)
	I0731 20:28:21.293590 1111910 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 20:28:21.865744 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-430887-m03 minikube.k8s.io/updated_at=2024_07_31T20_28_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=ha-430887 minikube.k8s.io/primary=false
	I0731 20:28:21.977275 1111910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-430887-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 20:28:22.077506 1111910 start.go:319] duration metric: took 21.93058496s to joinCluster
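
At this point m03 has joined as a third control plane (about 21s for the kubeadm join), been labeled, and had its control-plane NoSchedule taint removed so it can also carry workloads. A quick membership check, run on the primary the same way the labeling command above was, would be:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
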
	I0731 20:28:22.077606 1111910 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:28:22.077966 1111910 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:28:22.079109 1111910 out.go:177] * Verifying Kubernetes components...
	I0731 20:28:22.080547 1111910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:28:22.327292 1111910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:28:22.357460 1111910 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:28:22.357851 1111910 kapi.go:59] client config for ha-430887: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 20:28:22.357922 1111910 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0731 20:28:22.358185 1111910 node_ready.go:35] waiting up to 6m0s for node "ha-430887-m03" to be "Ready" ...
	I0731 20:28:22.358270 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:22.358278 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:22.358285 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:22.358289 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:22.361629 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:22.859113 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:22.859140 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:22.859151 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:22.859155 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:22.862632 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:23.358947 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:23.358975 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:23.358988 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:23.358996 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:23.369478 1111910 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0731 20:28:23.858780 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:23.858805 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:23.858814 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:23.858824 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:23.862357 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:24.359056 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:24.359081 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:24.359091 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:24.359095 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:24.362149 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:24.362753 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:24.858482 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:24.858508 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:24.858517 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:24.858521 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:24.862014 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:25.358524 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:25.358549 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:25.358559 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:25.358564 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:25.362085 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:25.859049 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:25.859078 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:25.859089 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:25.859098 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:25.862718 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:26.358553 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:26.358591 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:26.358603 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:26.358611 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:26.362273 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:26.362829 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:26.858778 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:26.858816 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:26.858825 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:26.858829 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:26.862054 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:27.359228 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:27.359255 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:27.359267 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:27.359273 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:27.363032 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:27.859079 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:27.859105 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:27.859115 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:27.859119 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:27.862239 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:28.359250 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:28.359273 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:28.359281 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:28.359287 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:28.362369 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:28.363031 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:28.859429 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:28.859453 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:28.859461 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:28.859465 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:28.862564 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:29.358482 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:29.358505 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:29.358514 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:29.358519 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:29.362340 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:29.858927 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:29.858957 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:29.858968 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:29.858973 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:29.863066 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:28:30.358362 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:30.358386 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:30.358394 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:30.358399 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:30.361551 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:30.859207 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:30.859233 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:30.859245 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:30.859249 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:30.862848 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:30.863830 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:31.359206 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:31.359230 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:31.359238 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:31.359241 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:31.362446 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:31.858555 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:31.858580 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:31.858588 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:31.858592 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:31.861681 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:32.359086 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:32.359117 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:32.359130 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:32.359136 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:32.362580 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:32.858962 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:32.858987 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:32.858996 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:32.858999 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:32.862340 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:33.359108 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:33.359131 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:33.359139 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:33.359144 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:33.362276 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:33.362878 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:33.859249 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:33.859274 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:33.859283 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:33.859287 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:33.862484 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:34.358822 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:34.358850 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:34.358861 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:34.358866 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:34.361891 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:34.859317 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:34.859341 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:34.859354 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:34.859359 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:34.862595 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:35.358965 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:35.358990 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:35.359000 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:35.359006 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:35.362626 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:35.363173 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:35.858529 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:35.858553 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:35.858563 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:35.858566 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:35.861484 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:36.359392 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:36.359418 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:36.359429 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:36.359439 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:36.362733 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:36.858596 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:36.858630 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:36.858647 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:36.858651 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:36.861987 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:37.358480 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:37.358504 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:37.358513 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:37.358516 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:37.361258 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:37.858784 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:37.858807 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:37.858815 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:37.858820 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:37.861857 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:37.862477 1111910 node_ready.go:53] node "ha-430887-m03" has status "Ready":"False"
	I0731 20:28:38.358761 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:38.358790 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:38.358806 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:38.358811 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:38.361998 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:38.858750 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:38.858771 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:38.858780 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:38.858785 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:38.861825 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:39.358453 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.358477 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.358485 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.358489 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.361356 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.361973 1111910 node_ready.go:49] node "ha-430887-m03" has status "Ready":"True"
	I0731 20:28:39.361993 1111910 node_ready.go:38] duration metric: took 17.003792582s for node "ha-430887-m03" to be "Ready" ...
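
The polling loop above is simply repeated GETs of the node object until its Ready condition flips to True (about 17s here, mostly waiting for the CNI and kubelet to settle). The same check as a one-liner, assuming kubectl is pointed at the ha-430887 cluster:

	kubectl get node ha-430887-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
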
	I0731 20:28:39.362002 1111910 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:28:39.362058 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:39.362070 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.362077 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.362082 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.367821 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:28:39.373864 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.373959 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rhlnq
	I0731 20:28:39.373969 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.373979 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.373985 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.376589 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.377432 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.377446 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.377456 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.377462 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.379891 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.380404 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.380420 1111910 pod_ready.go:81] duration metric: took 6.531856ms for pod "coredns-7db6d8ff4d-rhlnq" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.380430 1111910 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.380479 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tkm49
	I0731 20:28:39.380488 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.380497 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.380503 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.383100 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.383946 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.383957 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.383965 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.383970 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.386466 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.387082 1111910 pod_ready.go:92] pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.387097 1111910 pod_ready.go:81] duration metric: took 6.65916ms for pod "coredns-7db6d8ff4d-tkm49" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.387107 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.387157 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887
	I0731 20:28:39.387166 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.387176 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.387183 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.389871 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.390545 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:39.390559 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.390569 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.390573 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.392729 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.393259 1111910 pod_ready.go:92] pod "etcd-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.393278 1111910 pod_ready.go:81] duration metric: took 6.163758ms for pod "etcd-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.393286 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.393328 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m02
	I0731 20:28:39.393335 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.393342 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.393346 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.395308 1111910 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 20:28:39.395912 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:39.395928 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.395937 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.395945 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.398209 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.398642 1111910 pod_ready.go:92] pod "etcd-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.398658 1111910 pod_ready.go:81] duration metric: took 5.366532ms for pod "etcd-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.398664 1111910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.559071 1111910 request.go:629] Waited for 160.328596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m03
	I0731 20:28:39.559141 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-430887-m03
	I0731 20:28:39.559153 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.559165 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.559174 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.566687 1111910 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 20:28:39.758667 1111910 request.go:629] Waited for 191.268528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.758747 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:39.758755 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.758762 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.758766 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.761660 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:39.762098 1111910 pod_ready.go:92] pod "etcd-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:39.762117 1111910 pod_ready.go:81] duration metric: took 363.447423ms for pod "etcd-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
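
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's client-side rate limiter, not by the API server: with QPS and Burst left at their defaults (roughly 5 and 10), the bursts of paired pod/node GETs above get spaced out by a couple hundred milliseconds each. A hedged sketch of how a client could raise those limits is below; the values and kubeconfig path are arbitrary assumptions, and this is not something the test itself does.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        // Raising QPS/Burst relaxes the client-side limiter that produces the
        // request.go:629 throttling messages above (arbitrary example values).
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
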
	I0731 20:28:39.762136 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:39.959161 1111910 request.go:629] Waited for 196.944378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:28:39.959257 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887
	I0731 20:28:39.959269 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:39.959280 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:39.959287 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:39.962156 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.159092 1111910 request.go:629] Waited for 196.180513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:40.159158 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:40.159165 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.159184 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.159193 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.161538 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.162046 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.162067 1111910 pod_ready.go:81] duration metric: took 399.922435ms for pod "kube-apiserver-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.162076 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.359140 1111910 request.go:629] Waited for 196.970446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:28:40.359207 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m02
	I0731 20:28:40.359212 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.359220 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.359224 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.362223 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.559445 1111910 request.go:629] Waited for 196.359268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:40.559517 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:40.559522 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.559530 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.559534 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.562189 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:40.562886 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.562904 1111910 pod_ready.go:81] duration metric: took 400.82189ms for pod "kube-apiserver-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.562914 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.759081 1111910 request.go:629] Waited for 196.073598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m03
	I0731 20:28:40.759149 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-430887-m03
	I0731 20:28:40.759154 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.759162 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.759166 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.762227 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:40.958936 1111910 request.go:629] Waited for 195.897311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:40.958998 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:40.959003 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:40.959010 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:40.959014 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:40.963309 1111910 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 20:28:40.963874 1111910 pod_ready.go:92] pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:40.963897 1111910 pod_ready.go:81] duration metric: took 400.97635ms for pod "kube-apiserver-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:40.963911 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.159025 1111910 request.go:629] Waited for 195.001102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:28:41.159112 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887
	I0731 20:28:41.159123 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.159136 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.159145 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.162347 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.359414 1111910 request.go:629] Waited for 196.355123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:41.359478 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:41.359483 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.359491 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.359497 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.362550 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.363184 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:41.363210 1111910 pod_ready.go:81] duration metric: took 399.290567ms for pod "kube-controller-manager-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.363224 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.559165 1111910 request.go:629] Waited for 195.855971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:28:41.559242 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m02
	I0731 20:28:41.559249 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.559280 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.559290 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.562726 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.758604 1111910 request.go:629] Waited for 195.285279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:41.758662 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:41.758667 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.758674 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.758680 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.761946 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:41.762411 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:41.762430 1111910 pod_ready.go:81] duration metric: took 399.194377ms for pod "kube-controller-manager-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.762442 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:41.958503 1111910 request.go:629] Waited for 195.955663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m03
	I0731 20:28:41.958579 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-430887-m03
	I0731 20:28:41.958593 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:41.958605 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:41.958610 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:41.961769 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.158872 1111910 request.go:629] Waited for 196.233217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.158983 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.158998 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.159009 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.159016 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.162046 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.162544 1111910 pod_ready.go:92] pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.162565 1111910 pod_ready.go:81] duration metric: took 400.114992ms for pod "kube-controller-manager-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.162576 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4mft2" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.358555 1111910 request.go:629] Waited for 195.898783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mft2
	I0731 20:28:42.358658 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mft2
	I0731 20:28:42.358666 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.358677 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.358687 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.361987 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.559084 1111910 request.go:629] Waited for 196.369786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.559156 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:42.559163 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.559177 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.559187 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.562430 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:42.563106 1111910 pod_ready.go:92] pod "kube-proxy-4mft2" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.563127 1111910 pod_ready.go:81] duration metric: took 400.544805ms for pod "kube-proxy-4mft2" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.563136 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.758761 1111910 request.go:629] Waited for 195.545293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:28:42.758848 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hsd92
	I0731 20:28:42.758857 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.758865 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.758869 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.761796 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:42.958762 1111910 request.go:629] Waited for 196.362949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:42.958826 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:42.958833 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:42.958843 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:42.958849 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:42.961643 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:42.962375 1111910 pod_ready.go:92] pod "kube-proxy-hsd92" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:42.962393 1111910 pod_ready.go:81] duration metric: took 399.250667ms for pod "kube-proxy-hsd92" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:42.962402 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.159491 1111910 request.go:629] Waited for 197.008184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:28:43.159552 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m49fz
	I0731 20:28:43.159558 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.159570 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.159576 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.162318 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.358756 1111910 request.go:629] Waited for 195.744589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.358829 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.358836 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.358846 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.358864 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.361790 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.362386 1111910 pod_ready.go:92] pod "kube-proxy-m49fz" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:43.362405 1111910 pod_ready.go:81] duration metric: took 399.995104ms for pod "kube-proxy-m49fz" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.362416 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.558456 1111910 request.go:629] Waited for 195.959944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:28:43.558535 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887
	I0731 20:28:43.558540 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.558548 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.558555 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.561185 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:43.759081 1111910 request.go:629] Waited for 197.361763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.759162 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887
	I0731 20:28:43.759170 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.759179 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.759187 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.762461 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:43.763045 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:43.763066 1111910 pod_ready.go:81] duration metric: took 400.638758ms for pod "kube-scheduler-ha-430887" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.763075 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:43.959299 1111910 request.go:629] Waited for 196.129029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:28:43.959381 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m02
	I0731 20:28:43.959392 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:43.959403 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:43.959416 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:43.962385 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.158486 1111910 request.go:629] Waited for 195.365632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:44.158681 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m02
	I0731 20:28:44.158704 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.158713 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.158719 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.161674 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.162410 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:44.162427 1111910 pod_ready.go:81] duration metric: took 399.345789ms for pod "kube-scheduler-ha-430887-m02" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.162436 1111910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.358504 1111910 request.go:629] Waited for 196.003111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m03
	I0731 20:28:44.358592 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-430887-m03
	I0731 20:28:44.358599 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.358607 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.358614 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.361421 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.559427 1111910 request.go:629] Waited for 197.353126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:44.559489 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-430887-m03
	I0731 20:28:44.559494 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.559501 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.559505 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.563330 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:44.563981 1111910 pod_ready.go:92] pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 20:28:44.564001 1111910 pod_ready.go:81] duration metric: took 401.558982ms for pod "kube-scheduler-ha-430887-m03" in "kube-system" namespace to be "Ready" ...
	I0731 20:28:44.564013 1111910 pod_ready.go:38] duration metric: took 5.202000853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:28:44.564046 1111910 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:28:44.564138 1111910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:28:44.577774 1111910 api_server.go:72] duration metric: took 22.500125287s to wait for apiserver process to appear ...
	I0731 20:28:44.577801 1111910 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:28:44.577826 1111910 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0731 20:28:44.582020 1111910 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0731 20:28:44.582104 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0731 20:28:44.582114 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.582122 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.582128 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.583002 1111910 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 20:28:44.583078 1111910 api_server.go:141] control plane version: v1.30.3
	I0731 20:28:44.583095 1111910 api_server.go:131] duration metric: took 5.287222ms to wait for apiserver health ...
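
The healthz probe above is a plain GET on the apiserver's /healthz endpoint, which returns 200 with the body "ok" when the control plane is healthy; minikube then reads /version to record the control plane version (v1.30.3 here). The same check can be made through a clientset's REST client, sketched below under the same assumed kubeconfig path; this is an illustration, not the code path used by api_server.go.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz; a healthy apiserver answers 200 with the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }
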
	I0731 20:28:44.583102 1111910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:28:44.758754 1111910 request.go:629] Waited for 175.571394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:44.758842 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:44.758850 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.758858 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.758864 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.765473 1111910 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 20:28:44.772327 1111910 system_pods.go:59] 24 kube-system pods found
	I0731 20:28:44.772366 1111910 system_pods.go:61] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:28:44.772372 1111910 system_pods.go:61] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:28:44.772377 1111910 system_pods.go:61] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:28:44.772382 1111910 system_pods.go:61] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:28:44.772389 1111910 system_pods.go:61] "etcd-ha-430887-m03" [6d37da19-a94f-4068-9dd2-580c67d223d5] Running
	I0731 20:28:44.772394 1111910 system_pods.go:61] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:28:44.772399 1111910 system_pods.go:61] "kindnet-fbt5h" [42db9e05-a780-4945-a413-98fa5832c8d7] Running
	I0731 20:28:44.772404 1111910 system_pods.go:61] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:28:44.772409 1111910 system_pods.go:61] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:28:44.772414 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:28:44.772420 1111910 system_pods.go:61] "kube-apiserver-ha-430887-m03" [7f79c842-b83a-4eae-96c2-b6defb36ed65] Running
	I0731 20:28:44.772433 1111910 system_pods.go:61] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:28:44.772438 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:28:44.772447 1111910 system_pods.go:61] "kube-controller-manager-ha-430887-m03" [69f7ba2e-3b34-4797-b09e-05e82d37f656] Running
	I0731 20:28:44.772452 1111910 system_pods.go:61] "kube-proxy-4mft2" [71207460-fab2-4bf0-bfa6-180878539386] Running
	I0731 20:28:44.772455 1111910 system_pods.go:61] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:28:44.772459 1111910 system_pods.go:61] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:28:44.772463 1111910 system_pods.go:61] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:28:44.772467 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:28:44.772473 1111910 system_pods.go:61] "kube-scheduler-ha-430887-m03" [082e5224-ffd5-4ecb-a103-7a1901f29709] Running
	I0731 20:28:44.772476 1111910 system_pods.go:61] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:28:44.772480 1111910 system_pods.go:61] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:28:44.772486 1111910 system_pods.go:61] "kube-vip-ha-430887-m03" [53aeb41f-2430-4e51-9563-1878009bad9b] Running
	I0731 20:28:44.772491 1111910 system_pods.go:61] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:28:44.772497 1111910 system_pods.go:74] duration metric: took 189.381772ms to wait for pod list to return data ...
	I0731 20:28:44.772507 1111910 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:28:44.958956 1111910 request.go:629] Waited for 186.368147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:28:44.959020 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0731 20:28:44.959026 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:44.959034 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:44.959038 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:44.961922 1111910 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 20:28:44.962056 1111910 default_sa.go:45] found service account: "default"
	I0731 20:28:44.962071 1111910 default_sa.go:55] duration metric: took 189.558174ms for default service account to be created ...
	I0731 20:28:44.962079 1111910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:28:45.158659 1111910 request.go:629] Waited for 196.4985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:45.158722 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0731 20:28:45.158730 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:45.158737 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:45.158741 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:45.164683 1111910 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 20:28:45.172082 1111910 system_pods.go:86] 24 kube-system pods found
	I0731 20:28:45.172126 1111910 system_pods.go:89] "coredns-7db6d8ff4d-rhlnq" [3a333762-0e0a-4a9a-bede-b6cf8a2b221c] Running
	I0731 20:28:45.172135 1111910 system_pods.go:89] "coredns-7db6d8ff4d-tkm49" [5c751586-1fd3-4ebc-8d3f-602f3a70c3ac] Running
	I0731 20:28:45.172141 1111910 system_pods.go:89] "etcd-ha-430887" [c1505419-fc9a-442e-99a0-ba065faa840f] Running
	I0731 20:28:45.172148 1111910 system_pods.go:89] "etcd-ha-430887-m02" [51a3c519-0fab-4340-a484-8d382bec8c4f] Running
	I0731 20:28:45.172154 1111910 system_pods.go:89] "etcd-ha-430887-m03" [6d37da19-a94f-4068-9dd2-580c67d223d5] Running
	I0731 20:28:45.172160 1111910 system_pods.go:89] "kindnet-49h86" [5e5b0c1c-ff0c-422c-9d94-a0142fd2d4d5] Running
	I0731 20:28:45.172171 1111910 system_pods.go:89] "kindnet-fbt5h" [42db9e05-a780-4945-a413-98fa5832c8d7] Running
	I0731 20:28:45.172177 1111910 system_pods.go:89] "kindnet-xmjzn" [13a3055d-bcf0-472f-b9f6-787e6f4499cb] Running
	I0731 20:28:45.172187 1111910 system_pods.go:89] "kube-apiserver-ha-430887" [602c04df-b310-4bca-8960-8d24c59e2919] Running
	I0731 20:28:45.172194 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m02" [8e0b7edc-d079-4d14-81ee-5b2ab37239c6] Running
	I0731 20:28:45.172202 1111910 system_pods.go:89] "kube-apiserver-ha-430887-m03" [7f79c842-b83a-4eae-96c2-b6defb36ed65] Running
	I0731 20:28:45.172209 1111910 system_pods.go:89] "kube-controller-manager-ha-430887" [682793cf-2b76-4483-9926-1733c17c09cc] Running
	I0731 20:28:45.172221 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m02" [183243c7-be52-4c3d-b41b-cf6eefc1c669] Running
	I0731 20:28:45.172226 1111910 system_pods.go:89] "kube-controller-manager-ha-430887-m03" [69f7ba2e-3b34-4797-b09e-05e82d37f656] Running
	I0731 20:28:45.172230 1111910 system_pods.go:89] "kube-proxy-4mft2" [71207460-fab2-4bf0-bfa6-180878539386] Running
	I0731 20:28:45.172234 1111910 system_pods.go:89] "kube-proxy-hsd92" [9ec64df5-ccc0-4927-87e0-819d66291037] Running
	I0731 20:28:45.172238 1111910 system_pods.go:89] "kube-proxy-m49fz" [6686467c-0177-47b5-a286-cf718c901436] Running
	I0731 20:28:45.172245 1111910 system_pods.go:89] "kube-scheduler-ha-430887" [3c22927a-2760-49ae-9aea-2f09194581c2] Running
	I0731 20:28:45.172251 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m02" [23a00525-1647-44bc-abfa-5e6db2131442] Running
	I0731 20:28:45.172256 1111910 system_pods.go:89] "kube-scheduler-ha-430887-m03" [082e5224-ffd5-4ecb-a103-7a1901f29709] Running
	I0731 20:28:45.172262 1111910 system_pods.go:89] "kube-vip-ha-430887" [516521a0-b217-407d-90ee-917c6cb6991a] Running
	I0731 20:28:45.172267 1111910 system_pods.go:89] "kube-vip-ha-430887-m02" [421d15be-6980-4c04-b2bc-05ed559f2f2e] Running
	I0731 20:28:45.172272 1111910 system_pods.go:89] "kube-vip-ha-430887-m03" [53aeb41f-2430-4e51-9563-1878009bad9b] Running
	I0731 20:28:45.172276 1111910 system_pods.go:89] "storage-provisioner" [1eb16097-a994-4b42-b876-ebe7d6022be6] Running
	I0731 20:28:45.172285 1111910 system_pods.go:126] duration metric: took 210.199281ms to wait for k8s-apps to be running ...
	I0731 20:28:45.172299 1111910 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:28:45.172357 1111910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:28:45.186785 1111910 system_svc.go:56] duration metric: took 14.479473ms WaitForService to wait for kubelet
	I0731 20:28:45.186815 1111910 kubeadm.go:582] duration metric: took 23.109172519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:28:45.186854 1111910 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:28:45.359282 1111910 request.go:629] Waited for 172.337834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0731 20:28:45.359353 1111910 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0731 20:28:45.359366 1111910 round_trippers.go:469] Request Headers:
	I0731 20:28:45.359374 1111910 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 20:28:45.359383 1111910 round_trippers.go:473]     Accept: application/json, */*
	I0731 20:28:45.362765 1111910 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 20:28:45.363961 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.363980 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.363997 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.364004 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.364009 1111910 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:28:45.364014 1111910 node_conditions.go:123] node cpu capacity is 2
	I0731 20:28:45.364019 1111910 node_conditions.go:105] duration metric: took 177.156109ms to run NodePressure ...
	I0731 20:28:45.364036 1111910 start.go:241] waiting for startup goroutines ...
	I0731 20:28:45.364061 1111910 start.go:255] writing updated cluster config ...
	I0731 20:28:45.364390 1111910 ssh_runner.go:195] Run: rm -f paused
	I0731 20:28:45.415388 1111910 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 20:28:45.417430 1111910 out.go:177] * Done! kubectl is now configured to use "ha-430887" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.442610759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30b280c4-0238-4456-bfd0-ac5a599b1a00 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.443573307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85404d50-1916-41eb-9e37-62f12486eb03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.444063105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457996444017675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85404d50-1916-41eb-9e37-62f12486eb03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.444625316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fdd0ed3-a3fe-4a5b-a88d-edfe64909d59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.444732818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fdd0ed3-a3fe-4a5b-a88d-edfe64909d59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.444988637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fdd0ed3-a3fe-4a5b-a88d-edfe64909d59 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.453835039Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ce98f8a-2a29-4e00-9450-01549dfc438a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.454114148Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-tkmzn,Uid:b668a1b0-4434-4037-a0a1-0461e748521d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457726582855460,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:28:46.274201435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1eb16097-a994-4b42-b876-ebe7d6022be6,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1722457587647235870,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T20:26:27.036600697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tkm49,Uid:5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457587639188553,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:26:27.032625573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rhlnq,Uid:3a333762-0e0a-4a9a-bede-b6cf8a2b221c,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722457587331599940,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:26:27.025782727Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&PodSandboxMetadata{Name:kube-proxy-m49fz,Uid:6686467c-0177-47b5-a286-cf718c901436,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457572022128889,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-31T20:26:11.694848648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&PodSandboxMetadata{Name:kindnet-xmjzn,Uid:13a3055d-bcf0-472f-b9f6-787e6f4499cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457571996561305,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:26:11.687647321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-430887,Uid:2ba95cb3d7229e89f7742849cb28060a,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1722457550092827450,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{kubernetes.io/config.hash: 2ba95cb3d7229e89f7742849cb28060a,kubernetes.io/config.seen: 2024-07-31T20:25:49.619383463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-430887,Uid:586dfd40543240aed00e0fd894b7ddbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457550088566063,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: 586dfd40543240aed00e0fd894b7ddbf,kubernetes.io/config.seen: 2024-07-31T20:25:49.619385668Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&PodSandboxMetadata{Name:etcd-ha-430887,Uid:2ff059524622ab33693d7a7d489e8add,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457550085060834,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 2ff059524622ab33693d7a7d489e8add,kubernetes.io/config.seen: 2024-07-31T20:25:49.619384571Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2c805cc2a87b3507f9a
a8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-430887,Uid:ea7dc3b82901d19393b1a5032c0de400,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457550072713694,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea7dc3b82901d19393b1a5032c0de400,kubernetes.io/config.seen: 2024-07-31T20:25:49.619386670Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-430887,Uid:35257eb5487c079f33eba6618833709a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722457550068552666,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 35257eb5487c079f33eba6618833709a,kubernetes.io/config.seen: 2024-07-31T20:25:49.619379676Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1ce98f8a-2a29-4e00-9450-01549dfc438a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.455003001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c86ea91-23b2-4437-8604-a4c963d7c17e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.455089843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c86ea91-23b2-4437-8604-a4c963d7c17e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.455586969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c86ea91-23b2-4437-8604-a4c963d7c17e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.487491618Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3a4f0ee-ceb2-45bb-81d0-221fb8645a9e name=/runtime.v1.RuntimeService/Version
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.487584616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3a4f0ee-ceb2-45bb-81d0-221fb8645a9e name=/runtime.v1.RuntimeService/Version
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.488636653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90509456-fd67-4249-b27f-ae153dd950ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.489126482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457996489102021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90509456-fd67-4249-b27f-ae153dd950ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.489628938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f23c6937-457f-450b-8dd9-105be43a85a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.489706800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f23c6937-457f-450b-8dd9-105be43a85a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.489948753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f23c6937-457f-450b-8dd9-105be43a85a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.526613587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fef8183-2c0b-4f60-ac3b-8bccdeb7f1d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.526874175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fef8183-2c0b-4f60-ac3b-8bccdeb7f1d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.528458414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99ac08b4-d0a6-486c-a027-68a02c8363d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.529042442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457996529017052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99ac08b4-d0a6-486c-a027-68a02c8363d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.529667601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=106c38cc-ebfa-45d9-b1d9-5d8edd6a163b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.529721582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=106c38cc-ebfa-45d9-b1d9-5d8edd6a163b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:33:16 ha-430887 crio[682]: time="2024-07-31 20:33:16.530105807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457728387807943,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587826789494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9,PodSandboxId:bf04533b742a02fcfc1f6d87de9f2ac2e1a2eba0d83a8b4211638c909b6278cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457587771759756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457587459292874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e
0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722457575608771403,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245757
2328090522,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94,PodSandboxId:4e13ff1bf83839441b34ff2b36e31d3093943ea1cda6f7a2d9071e8f53b694e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224575527
39795460,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba95cb3d7229e89f7742849cb28060a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a,PodSandboxId:fad3c90ca76709cb864b5f1b79b5284946dc7d8f71bd8ea05855205ce1705b20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457550320375310,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457550283002451,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457550254458449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0,PodSandboxId:a2c805cc2a87b3507f9aa8d2a4fb961c8412e0e01846065d50a5329b4b687b5a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457550230452492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=106c38cc-ebfa-45d9-b1d9-5d8edd6a163b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b61252be77d59       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   94749dc3b8a05       busybox-fc5497c4f-tkmzn
	6804a88577bb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   364daaeb39b2a       coredns-7db6d8ff4d-tkm49
	431be4d60e882       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bf04533b742a0       storage-provisioner
	a3a604ebae38f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   c5096ff8ccf93       coredns-7db6d8ff4d-rhlnq
	63366667a98d5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   75a5e3ddf89ae       kindnet-xmjzn
	2c3cfe9da185a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   45f974d9fa89f       kube-proxy-m49fz
	87bc5b4c15b86       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   4e13ff1bf8383       kube-vip-ha-430887
	03b10e7eedd37       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   fad3c90ca7670       kube-apiserver-ha-430887
	019dbd42b381f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   e2bba8d22a3ce       kube-scheduler-ha-430887
	5d05fc1d45725       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   9da4629d918d3       etcd-ha-430887
	31bfc4408c834       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   a2c805cc2a87b       kube-controller-manager-ha-430887
	
	
	==> coredns [6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02] <==
	[INFO] 10.244.1.2:40160 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003254124s
	[INFO] 10.244.0.4:52726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077926s
	[INFO] 10.244.0.4:47159 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.0000495s
	[INFO] 10.244.1.2:58934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121874s
	[INFO] 10.244.1.2:43600 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179937s
	[INFO] 10.244.1.2:51933 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175849s
	[INFO] 10.244.1.2:36619 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118307s
	[INFO] 10.244.2.2:51012 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102784s
	[INFO] 10.244.2.2:46299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151507s
	[INFO] 10.244.2.2:32857 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075858s
	[INFO] 10.244.0.4:40942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087643s
	[INFO] 10.244.0.4:34086 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741525s
	[INFO] 10.244.0.4:52613 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051957s
	[INFO] 10.244.0.4:48069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001210819s
	[INFO] 10.244.1.2:57723 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084885s
	[INFO] 10.244.1.2:43800 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099387s
	[INFO] 10.244.2.2:48837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134956s
	[INFO] 10.244.2.2:46133 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008076s
	[INFO] 10.244.1.2:52179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123976s
	[INFO] 10.244.1.2:38064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121703s
	[INFO] 10.244.2.2:38356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183387s
	[INFO] 10.244.2.2:45481 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194275s
	[INFO] 10.244.2.2:42027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138509s
	[INFO] 10.244.2.2:47364 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140763s
	[INFO] 10.244.0.4:57224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075497s
	
	
	==> coredns [a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a] <==
	[INFO] 10.244.1.2:58003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160037s
	[INFO] 10.244.1.2:37096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00351051s
	[INFO] 10.244.1.2:39762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151696s
	[INFO] 10.244.2.2:49534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.2.2:60700 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001603178s
	[INFO] 10.244.2.2:47959 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001232076s
	[INFO] 10.244.2.2:48165 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186379s
	[INFO] 10.244.2.2:37258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102901s
	[INFO] 10.244.0.4:51406 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128128s
	[INFO] 10.244.0.4:52718 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122292s
	[INFO] 10.244.0.4:35814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077793s
	[INFO] 10.244.0.4:57174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050499s
	[INFO] 10.244.1.2:35721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152974s
	[INFO] 10.244.1.2:52365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099511s
	[INFO] 10.244.2.2:56276 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095649s
	[INFO] 10.244.2.2:33350 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089031s
	[INFO] 10.244.0.4:39526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089609s
	[INFO] 10.244.0.4:32892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000036988s
	[INFO] 10.244.0.4:54821 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028078s
	[INFO] 10.244.0.4:40693 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000023261s
	[INFO] 10.244.1.2:56760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130165s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109643s
	[INFO] 10.244.0.4:55943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117823s
	[INFO] 10.244.0.4:40806 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010301s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076201s
	
	
	==> describe nodes <==
	Name:               ha-430887
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:25:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:33:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:29:00 +0000   Wed, 31 Jul 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-430887
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d983ecff48054665b7d9523d0704c9fc
	  System UUID:                d983ecff-4805-4665-b7d9-523d0704c9fc
	  Boot ID:                    713545a1-3d19-4194-8d69-3cd83a4e4967
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tkmzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-rhlnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 coredns-7db6d8ff4d-tkm49             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 etcd-ha-430887                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m21s
	  kube-system                 kindnet-xmjzn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m5s
	  kube-system                 kube-apiserver-ha-430887             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-430887    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-m49fz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-ha-430887             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-vip-ha-430887                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m27s (x7 over 7m27s)  kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m27s (x8 over 7m27s)  kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x8 over 7m27s)  kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m20s                  kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s                  kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s                  kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal  NodeReady                6m50s                  kubelet          Node ha-430887 status is now: NodeReady
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	
	
	Name:               ha-430887-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:27:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:29:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 20:29:09 +0000   Wed, 31 Jul 2024 20:30:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-430887-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec9db720f1af4a7b8ddebc5f57826488
	  System UUID:                ec9db720-f1af-4a7b-8dde-bc5f57826488
	  Boot ID:                    97b08b0d-d235-4e8a-b4a7-e20b5af5885a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hhwcx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-430887-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-49h86                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-430887-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-430887-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-hsd92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-430887-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-430887-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x2 over 6m9s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x2 over 6m9s)  kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x2 over 6m9s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                 node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           5m53s                node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeReady                5m49s                kubelet          Node ha-430887-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeNotReady             2m46s                node-controller  Node ha-430887-m02 status is now: NodeNotReady
	
	
	Name:               ha-430887-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_28_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:33:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:28:49 +0000   Wed, 31 Jul 2024 20:28:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-430887-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d94d6e3c9c5248219d2ba3137d0cbf54
	  System UUID:                d94d6e3c-9c52-4821-9d2b-a3137d0cbf54
	  Boot ID:                    12aeb95e-ca69-400d-a151-3febbd846662
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lt5n8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-430887-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-fbt5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-ha-430887-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-430887-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-4mft2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-ha-430887-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-vip-ha-430887-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node ha-430887-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	
	
	Name:               ha-430887-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_29_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:29:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:33:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:29:52 +0000   Wed, 31 Jul 2024 20:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-430887-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e62b3ad5cf6244ff98aa273667a5b995
	  System UUID:                e62b3ad5-cf62-44ff-98aa-273667a5b995
	  Boot ID:                    2766dd92-7fcf-4d2d-8743-67c3234050f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gg2tl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-proxy-8cqlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m55s (x2 over 3m55s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x2 over 3m55s)  kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x2 over 3m55s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node ha-430887-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 20:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047211] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.034798] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.637543] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.671838] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.539428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.396030] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053894] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164850] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.142838] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.248524] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.814747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.436744] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.058175] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.102873] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.077595] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 20:26] kauditd_printk_skb: 18 callbacks suppressed
	[ +24.630735] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 20:27] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338] <==
	{"level":"warn","ts":"2024-07-31T20:33:16.45075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.550514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.55619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.650707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.74992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.777287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.783022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.78625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.794176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.801341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.819875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.824346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.832371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.841359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.877828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.880982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.889372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.896893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.903279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.906127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.909027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.914418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.921385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.926581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:33:16.950753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:33:16 up 7 min,  0 users,  load average: 0.21, 0.21, 0.13
	Linux ha-430887 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d] <==
	I0731 20:32:46.553572       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:32:56.555749       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:32:56.555793       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:32:56.555927       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:32:56.555947       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:32:56.556033       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:32:56.556051       1 main.go:299] handling current node
	I0731 20:32:56.556062       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:32:56.556079       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:33:06.556455       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:33:06.556540       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:33:06.556680       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:33:06.556704       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:33:06.556765       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:33:06.556784       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:33:06.556839       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:33:06.556858       1 main.go:299] handling current node
	I0731 20:33:16.552752       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:33:16.552794       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:33:16.552910       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:33:16.552929       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:33:16.552988       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:33:16.553006       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:33:16.553055       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:33:16.553072       1 main.go:299] handling current node
	
	
	==> kube-apiserver [03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a] <==
	I0731 20:25:55.463416       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 20:25:55.470202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0731 20:25:55.471194       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:25:55.476969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 20:25:56.052016       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:25:56.577915       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:25:56.588744       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 20:25:56.598282       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:26:11.513080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 20:26:11.663027       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 20:28:49.961390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58044: use of closed network connection
	E0731 20:28:50.140377       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58066: use of closed network connection
	E0731 20:28:50.315266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58082: use of closed network connection
	E0731 20:28:50.517398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58100: use of closed network connection
	E0731 20:28:50.694991       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33722: use of closed network connection
	E0731 20:28:50.864532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33742: use of closed network connection
	E0731 20:28:51.035811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33774: use of closed network connection
	E0731 20:28:51.206946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33806: use of closed network connection
	E0731 20:28:51.386884       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33828: use of closed network connection
	E0731 20:28:51.668773       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33848: use of closed network connection
	E0731 20:28:51.841520       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33864: use of closed network connection
	E0731 20:28:52.015308       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33880: use of closed network connection
	E0731 20:28:52.189900       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	E0731 20:28:52.403559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33900: use of closed network connection
	E0731 20:28:52.569367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33926: use of closed network connection
	
	
	==> kube-controller-manager [31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0] <==
	I0731 20:28:46.315843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.598752ms"
	I0731 20:28:46.444818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.853042ms"
	E0731 20:28:46.444932       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:28:46.525462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.395943ms"
	I0731 20:28:46.613567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.147896ms"
	I0731 20:28:46.720384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.761956ms"
	E0731 20:28:46.721036       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:28:46.721296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.401µs"
	I0731 20:28:46.726483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.388µs"
	I0731 20:28:47.005667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.946µs"
	I0731 20:28:49.170350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.876629ms"
	I0731 20:28:49.170534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.332µs"
	I0731 20:28:49.296229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.911µs"
	I0731 20:28:49.403746       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.281742ms"
	I0731 20:28:49.406351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.926µs"
	I0731 20:28:49.483106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.8971ms"
	I0731 20:28:49.483295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.169µs"
	E0731 20:29:21.293594       1 certificate_controller.go:146] Sync csr-9vxhw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-9vxhw": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:29:21.552609       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-430887-m04\" does not exist"
	I0731 20:29:21.595879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-430887-m04" podCIDRs=["10.244.3.0/24"]
	I0731 20:29:25.817773       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-430887-m04"
	I0731 20:29:40.246506       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-430887-m04"
	I0731 20:30:30.854112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-430887-m04"
	I0731 20:30:30.994059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.970894ms"
	I0731 20:30:30.996192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.596µs"
	
	
	==> kube-proxy [2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115] <==
	I0731 20:26:12.695961       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:26:12.714715       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	I0731 20:26:12.753496       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:26:12.753551       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:26:12.753569       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:26:12.756334       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:26:12.756594       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:26:12.756620       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:26:12.758303       1 config.go:192] "Starting service config controller"
	I0731 20:26:12.758567       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:26:12.758618       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:26:12.758634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:26:12.759554       1 config.go:319] "Starting node config controller"
	I0731 20:26:12.759581       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:26:12.858985       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:26:12.859016       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:26:12.859747       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756] <==
	E0731 20:28:46.289774       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b668a1b0-4434-4037-a0a1-0461e748521d(default/busybox-fc5497c4f-tkmzn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-tkmzn"
	E0731 20:28:46.289894       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tkmzn\": pod busybox-fc5497c4f-tkmzn is already assigned to node \"ha-430887\"" pod="default/busybox-fc5497c4f-tkmzn"
	I0731 20:28:46.290007       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tkmzn" node="ha-430887"
	E0731 20:28:46.289647       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lt5n8\": pod busybox-fc5497c4f-lt5n8 is already assigned to node \"ha-430887-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lt5n8" node="ha-430887-m03"
	E0731 20:28:46.290769       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4c829ff4-83b3-406d-8dbf-77dda232f563(default/busybox-fc5497c4f-lt5n8) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lt5n8"
	E0731 20:28:46.290864       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lt5n8\": pod busybox-fc5497c4f-lt5n8 is already assigned to node \"ha-430887-m03\"" pod="default/busybox-fc5497c4f-lt5n8"
	I0731 20:28:46.290899       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lt5n8" node="ha-430887-m03"
	E0731 20:29:21.609655       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8cqlp\": pod kube-proxy-8cqlp is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8cqlp" node="ha-430887-m04"
	E0731 20:29:21.611396       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8cqlp\": pod kube-proxy-8cqlp is already assigned to node \"ha-430887-m04\"" pod="kube-system/kube-proxy-8cqlp"
	E0731 20:29:21.612033       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gh25l\": pod kindnet-gh25l is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gh25l" node="ha-430887-m04"
	E0731 20:29:21.612122       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3f5b9250-8827-4c9b-a14d-dc47fd5cb3bc(kube-system/kindnet-gh25l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gh25l"
	E0731 20:29:21.612289       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gh25l\": pod kindnet-gh25l is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-gh25l"
	I0731 20:29:21.612350       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gh25l" node="ha-430887-m04"
	E0731 20:29:21.645766       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl7g6\": pod kube-proxy-fl7g6 is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl7g6" node="ha-430887-m04"
	E0731 20:29:21.647529       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 53424cfa-1677-492b-aa43-9b9ab353b4de(kube-system/kube-proxy-fl7g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl7g6"
	E0731 20:29:21.647684       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl7g6\": pod kube-proxy-fl7g6 is already assigned to node \"ha-430887-m04\"" pod="kube-system/kube-proxy-fl7g6"
	I0731 20:29:21.647860       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl7g6" node="ha-430887-m04"
	E0731 20:29:21.651039       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gg2tl\": pod kindnet-gg2tl is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gg2tl" node="ha-430887-m04"
	E0731 20:29:21.651183       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6681caa0-2da7-43db-a4ec-2270d5130ba8(kube-system/kindnet-gg2tl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gg2tl"
	E0731 20:29:21.651253       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gg2tl\": pod kindnet-gg2tl is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-gg2tl"
	I0731 20:29:21.651291       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gg2tl" node="ha-430887-m04"
	E0731 20:29:21.743717       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-c2tw8\": pod kindnet-c2tw8 is already assigned to node \"ha-430887-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-c2tw8" node="ha-430887-m04"
	E0731 20:29:21.745715       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2315da31-4f29-4139-ac00-c3cc1bcd457d(kube-system/kindnet-c2tw8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-c2tw8"
	E0731 20:29:21.745783       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-c2tw8\": pod kindnet-c2tw8 is already assigned to node \"ha-430887-m04\"" pod="kube-system/kindnet-c2tw8"
	I0731 20:29:21.745826       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-c2tw8" node="ha-430887-m04"
	
	
	==> kubelet <==
	Jul 31 20:28:56 ha-430887 kubelet[1378]: E0731 20:28:56.468622    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:28:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:28:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:29:56 ha-430887 kubelet[1378]: E0731 20:29:56.467064    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:29:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:29:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:30:56 ha-430887 kubelet[1378]: E0731 20:30:56.467405    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:30:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:30:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:31:56 ha-430887 kubelet[1378]: E0731 20:31:56.466743    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:31:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:31:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:32:56 ha-430887 kubelet[1378]: E0731 20:32:56.466268    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:32:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:32:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:32:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:32:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
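The repeated "Could not set up iptables canary" entries at the end of the kubelet log above indicate that the guest kernel has no IPv6 nat table available; kubelet only creates that chain as a canary, so the message is noise rather than the cause of this failure. A minimal check inside the node, assuming the ip6table_nat module is actually shipped in the minikube guest image (if it is not, the error simply persists):

	out/minikube-linux-amd64 -p ha-430887 ssh "sudo modprobe ip6table_nat"     # try to load the IPv6 nat table module
	out/minikube-linux-amd64 -p ha-430887 ssh "sudo ip6tables -t nat -L -n"    # should list the nat table instead of "Table does not exist"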
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430887 -n ha-430887
helpers_test.go:261: (dbg) Run:  kubectl --context ha-430887 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.81s)
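The "already assigned to node" bind failures and "scheduler cache ForgetPod failed" entries in the kube-scheduler log above are the usual signature of a short window during an HA restart in which a second scheduler instance races the leader on pods that are already bound; the scheduler then drops each pod from its queue ("Pod has been assigned to node. Abort adding it back to queue.") and recovers on its own. One way to confirm that only a single scheduler currently holds leadership, using the upstream default leader-election Lease name and namespace (not something specific to this run):

	kubectl --context ha-430887 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'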

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-430887 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-430887 -v=7 --alsologtostderr
E0731 20:34:31.358188 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:34:59.041083 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-430887 -v=7 --alsologtostderr: exit status 82 (2m1.79859608s)

                                                
                                                
-- stdout --
	* Stopping node "ha-430887-m04"  ...
	* Stopping node "ha-430887-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:33:18.359167 1117770 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:33:18.359441 1117770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:18.359451 1117770 out.go:304] Setting ErrFile to fd 2...
	I0731 20:33:18.359456 1117770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:33:18.359636 1117770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:33:18.359888 1117770 out.go:298] Setting JSON to false
	I0731 20:33:18.359988 1117770 mustload.go:65] Loading cluster: ha-430887
	I0731 20:33:18.360382 1117770 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:33:18.360474 1117770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:33:18.360656 1117770 mustload.go:65] Loading cluster: ha-430887
	I0731 20:33:18.360785 1117770 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:33:18.360821 1117770 stop.go:39] StopHost: ha-430887-m04
	I0731 20:33:18.361166 1117770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:18.361218 1117770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:18.376669 1117770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0731 20:33:18.377106 1117770 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:18.377703 1117770 main.go:141] libmachine: Using API Version  1
	I0731 20:33:18.377729 1117770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:18.378105 1117770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:18.380668 1117770 out.go:177] * Stopping node "ha-430887-m04"  ...
	I0731 20:33:18.382184 1117770 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:33:18.382218 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:33:18.382440 1117770 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:33:18.382463 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:33:18.385145 1117770 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:18.385577 1117770 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:29:07 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:33:18.385597 1117770 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:33:18.385722 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:33:18.385909 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:33:18.386081 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:33:18.386238 1117770 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:33:18.470625 1117770 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:33:18.522925 1117770 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:33:18.576299 1117770 main.go:141] libmachine: Stopping "ha-430887-m04"...
	I0731 20:33:18.576341 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:33:18.577901 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .Stop
	I0731 20:33:18.581379 1117770 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 0/120
	I0731 20:33:19.701376 1117770 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:33:19.702825 1117770 main.go:141] libmachine: Machine "ha-430887-m04" was stopped.
	I0731 20:33:19.702847 1117770 stop.go:75] duration metric: took 1.320683336s to stop
	I0731 20:33:19.702879 1117770 stop.go:39] StopHost: ha-430887-m03
	I0731 20:33:19.703210 1117770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:33:19.703261 1117770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:33:19.717916 1117770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0731 20:33:19.718453 1117770 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:33:19.718992 1117770 main.go:141] libmachine: Using API Version  1
	I0731 20:33:19.719014 1117770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:33:19.719317 1117770 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:33:19.721911 1117770 out.go:177] * Stopping node "ha-430887-m03"  ...
	I0731 20:33:19.723044 1117770 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:33:19.723074 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .DriverName
	I0731 20:33:19.723287 1117770 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:33:19.723316 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHHostname
	I0731 20:33:19.726102 1117770 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:19.726491 1117770 main.go:141] libmachine: (ha-430887-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:fa:c0", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:27:46 +0000 UTC Type:0 Mac:52:54:00:52:fa:c0 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-430887-m03 Clientid:01:52:54:00:52:fa:c0}
	I0731 20:33:19.726522 1117770 main.go:141] libmachine: (ha-430887-m03) DBG | domain ha-430887-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:52:fa:c0 in network mk-ha-430887
	I0731 20:33:19.726665 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHPort
	I0731 20:33:19.726889 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHKeyPath
	I0731 20:33:19.727022 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .GetSSHUsername
	I0731 20:33:19.727188 1117770 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m03/id_rsa Username:docker}
	I0731 20:33:19.805710 1117770 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:33:19.857228 1117770 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:33:19.910418 1117770 main.go:141] libmachine: Stopping "ha-430887-m03"...
	I0731 20:33:19.910464 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .GetState
	I0731 20:33:19.912149 1117770 main.go:141] libmachine: (ha-430887-m03) Calling .Stop
	I0731 20:33:19.915447 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 0/120
	I0731 20:33:20.916944 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 1/120
	I0731 20:33:21.918463 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 2/120
	I0731 20:33:22.919943 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 3/120
	I0731 20:33:23.921240 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 4/120
	I0731 20:33:24.923308 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 5/120
	I0731 20:33:25.925208 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 6/120
	I0731 20:33:26.926656 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 7/120
	I0731 20:33:27.928348 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 8/120
	I0731 20:33:28.930113 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 9/120
	I0731 20:33:29.932320 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 10/120
	I0731 20:33:30.934108 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 11/120
	I0731 20:33:31.935603 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 12/120
	I0731 20:33:32.937216 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 13/120
	I0731 20:33:33.938725 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 14/120
	I0731 20:33:34.940197 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 15/120
	I0731 20:33:35.942004 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 16/120
	I0731 20:33:36.943683 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 17/120
	I0731 20:33:37.945301 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 18/120
	I0731 20:33:38.946935 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 19/120
	I0731 20:33:39.948679 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 20/120
	I0731 20:33:40.950070 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 21/120
	I0731 20:33:41.951371 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 22/120
	I0731 20:33:42.952890 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 23/120
	I0731 20:33:43.954695 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 24/120
	I0731 20:33:44.956763 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 25/120
	I0731 20:33:45.958446 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 26/120
	I0731 20:33:46.959937 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 27/120
	I0731 20:33:47.961698 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 28/120
	I0731 20:33:48.963483 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 29/120
	I0731 20:33:49.965061 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 30/120
	I0731 20:33:50.967027 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 31/120
	I0731 20:33:51.968511 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 32/120
	I0731 20:33:52.969898 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 33/120
	I0731 20:33:53.971371 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 34/120
	I0731 20:33:54.973083 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 35/120
	I0731 20:33:55.974722 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 36/120
	I0731 20:33:56.976466 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 37/120
	I0731 20:33:57.978573 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 38/120
	I0731 20:33:58.980054 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 39/120
	I0731 20:33:59.981840 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 40/120
	I0731 20:34:00.983174 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 41/120
	I0731 20:34:01.985022 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 42/120
	I0731 20:34:02.986674 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 43/120
	I0731 20:34:03.987887 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 44/120
	I0731 20:34:04.989957 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 45/120
	I0731 20:34:05.991344 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 46/120
	I0731 20:34:06.992922 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 47/120
	I0731 20:34:07.994593 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 48/120
	I0731 20:34:08.995955 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 49/120
	I0731 20:34:09.997964 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 50/120
	I0731 20:34:10.999327 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 51/120
	I0731 20:34:12.000762 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 52/120
	I0731 20:34:13.002120 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 53/120
	I0731 20:34:14.003453 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 54/120
	I0731 20:34:15.005329 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 55/120
	I0731 20:34:16.006655 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 56/120
	I0731 20:34:17.007960 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 57/120
	I0731 20:34:18.009264 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 58/120
	I0731 20:34:19.010586 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 59/120
	I0731 20:34:20.012202 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 60/120
	I0731 20:34:21.013449 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 61/120
	I0731 20:34:22.014900 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 62/120
	I0731 20:34:23.017038 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 63/120
	I0731 20:34:24.018352 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 64/120
	I0731 20:34:25.020153 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 65/120
	I0731 20:34:26.021476 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 66/120
	I0731 20:34:27.023029 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 67/120
	I0731 20:34:28.024732 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 68/120
	I0731 20:34:29.026293 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 69/120
	I0731 20:34:30.028161 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 70/120
	I0731 20:34:31.029474 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 71/120
	I0731 20:34:32.030758 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 72/120
	I0731 20:34:33.032026 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 73/120
	I0731 20:34:34.033427 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 74/120
	I0731 20:34:35.035147 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 75/120
	I0731 20:34:36.036562 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 76/120
	I0731 20:34:37.037819 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 77/120
	I0731 20:34:38.039099 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 78/120
	I0731 20:34:39.040439 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 79/120
	I0731 20:34:40.042194 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 80/120
	I0731 20:34:41.043593 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 81/120
	I0731 20:34:42.045118 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 82/120
	I0731 20:34:43.046576 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 83/120
	I0731 20:34:44.048452 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 84/120
	I0731 20:34:45.050340 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 85/120
	I0731 20:34:46.051636 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 86/120
	I0731 20:34:47.053092 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 87/120
	I0731 20:34:48.054548 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 88/120
	I0731 20:34:49.055920 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 89/120
	I0731 20:34:50.057996 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 90/120
	I0731 20:34:51.059358 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 91/120
	I0731 20:34:52.060721 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 92/120
	I0731 20:34:53.062120 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 93/120
	I0731 20:34:54.063755 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 94/120
	I0731 20:34:55.065368 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 95/120
	I0731 20:34:56.066701 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 96/120
	I0731 20:34:57.067949 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 97/120
	I0731 20:34:58.069305 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 98/120
	I0731 20:34:59.070616 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 99/120
	I0731 20:35:00.071990 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 100/120
	I0731 20:35:01.073443 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 101/120
	I0731 20:35:02.075504 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 102/120
	I0731 20:35:03.077070 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 103/120
	I0731 20:35:04.078775 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 104/120
	I0731 20:35:05.080812 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 105/120
	I0731 20:35:06.082569 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 106/120
	I0731 20:35:07.083789 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 107/120
	I0731 20:35:08.086027 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 108/120
	I0731 20:35:09.087431 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 109/120
	I0731 20:35:10.088668 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 110/120
	I0731 20:35:11.090102 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 111/120
	I0731 20:35:12.091731 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 112/120
	I0731 20:35:13.092989 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 113/120
	I0731 20:35:14.094377 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 114/120
	I0731 20:35:15.096015 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 115/120
	I0731 20:35:16.097294 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 116/120
	I0731 20:35:17.098843 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 117/120
	I0731 20:35:18.100375 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 118/120
	I0731 20:35:19.101860 1117770 main.go:141] libmachine: (ha-430887-m03) Waiting for machine to stop 119/120
	I0731 20:35:20.102730 1117770 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:35:20.102804 1117770 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:35:20.105077 1117770 out.go:177] 
	W0731 20:35:20.106495 1117770 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:35:20.106519 1117770 out.go:239] * 
	* 
	W0731 20:35:20.110528 1117770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:35:20.111947 1117770 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-430887 -v=7 --alsologtostderr" : exit status 82
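The exit status 82 here corresponds to the GUEST_STOP_TIMEOUT in the stderr above: node m04 stopped almost immediately, but m03 never reported a stopped state in any of the 120 poll attempts, so the stop was abandoned while the VM was still "Running". When reproducing this interactively with the kvm2 driver, the stuck domain can be powered off directly through libvirt; a sketch, assuming the libvirt domain uses the node name shown in the log and the qemu:///system URI from the cluster config:

	virsh --connect qemu:///system list --all               # confirm the domain name and current state
	virsh --connect qemu:///system destroy ha-430887-m03    # hard power-off the VM that did not shut down gracefully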
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-430887 --wait=true -v=7 --alsologtostderr
E0731 20:37:00.018978 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:38:23.063227 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:39:31.358237 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-430887 --wait=true -v=7 --alsologtostderr: (4m27.886223856s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-430887
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430887 -n ha-430887
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-430887 logs -n 25: (1.654158928s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m04 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp testdata/cp-test.txt                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m04_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03:/home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m03 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-430887 node stop m02 -v=7                                                     | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-430887 node start m02 -v=7                                                    | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-430887 -v=7                                                           | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-430887 -v=7                                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-430887 --wait=true -v=7                                                    | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:35 UTC | 31 Jul 24 20:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-430887                                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:39 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:35:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:35:20.162311 1118228 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:35:20.162575 1118228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:35:20.162583 1118228 out.go:304] Setting ErrFile to fd 2...
	I0731 20:35:20.162587 1118228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:35:20.162791 1118228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:35:20.163321 1118228 out.go:298] Setting JSON to false
	I0731 20:35:20.164449 1118228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15471,"bootTime":1722442649,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:35:20.164526 1118228 start.go:139] virtualization: kvm guest
	I0731 20:35:20.167014 1118228 out.go:177] * [ha-430887] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:35:20.168751 1118228 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:35:20.168771 1118228 notify.go:220] Checking for updates...
	I0731 20:35:20.171645 1118228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:35:20.172948 1118228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:35:20.174239 1118228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:35:20.175390 1118228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:35:20.176629 1118228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:35:20.178365 1118228 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:35:20.178471 1118228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:35:20.178857 1118228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:20.178935 1118228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:20.195271 1118228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 20:35:20.195788 1118228 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:20.196457 1118228 main.go:141] libmachine: Using API Version  1
	I0731 20:35:20.196506 1118228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:20.196928 1118228 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:20.197149 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.232614 1118228 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:35:20.233928 1118228 start.go:297] selected driver: kvm2
	I0731 20:35:20.233941 1118228 start.go:901] validating driver "kvm2" against &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:35:20.234108 1118228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:35:20.234458 1118228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:35:20.234549 1118228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:35:20.250826 1118228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:35:20.251543 1118228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:35:20.251611 1118228 cni.go:84] Creating CNI manager for ""
	I0731 20:35:20.251623 1118228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 20:35:20.251689 1118228 start.go:340] cluster config:
	{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:35:20.251828 1118228 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:35:20.253534 1118228 out.go:177] * Starting "ha-430887" primary control-plane node in "ha-430887" cluster
	I0731 20:35:20.254768 1118228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:35:20.254812 1118228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:35:20.254831 1118228 cache.go:56] Caching tarball of preloaded images
	I0731 20:35:20.254922 1118228 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:35:20.254934 1118228 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:35:20.255095 1118228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:35:20.255304 1118228 start.go:360] acquireMachinesLock for ha-430887: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:35:20.255359 1118228 start.go:364] duration metric: took 33.478µs to acquireMachinesLock for "ha-430887"
	I0731 20:35:20.255379 1118228 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:35:20.255389 1118228 fix.go:54] fixHost starting: 
	I0731 20:35:20.255656 1118228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:20.255695 1118228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:20.270221 1118228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0731 20:35:20.270667 1118228 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:20.271163 1118228 main.go:141] libmachine: Using API Version  1
	I0731 20:35:20.271188 1118228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:20.271571 1118228 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:20.271742 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.271895 1118228 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:35:20.273528 1118228 fix.go:112] recreateIfNeeded on ha-430887: state=Running err=<nil>
	W0731 20:35:20.273550 1118228 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:35:20.276236 1118228 out.go:177] * Updating the running kvm2 "ha-430887" VM ...
	I0731 20:35:20.277623 1118228 machine.go:94] provisionDockerMachine start ...
	I0731 20:35:20.277645 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.277879 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.280422 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.280856 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.280875 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.281030 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.281226 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.281368 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.281489 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.281650 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.281886 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.281898 1118228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:35:20.384209 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:35:20.384234 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.384498 1118228 buildroot.go:166] provisioning hostname "ha-430887"
	I0731 20:35:20.384528 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.384696 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.386915 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.387303 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.387332 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.387447 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.387650 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.387888 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.388064 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.388262 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.388435 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.388447 1118228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887 && echo "ha-430887" | sudo tee /etc/hostname
	I0731 20:35:20.508473 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:35:20.508514 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.511422 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.511787 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.511824 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.512032 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.512274 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.512450 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.512591 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.512778 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.513005 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.513029 1118228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:35:20.617354 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:35:20.617388 1118228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:35:20.617425 1118228 buildroot.go:174] setting up certificates
	I0731 20:35:20.617435 1118228 provision.go:84] configureAuth start
	I0731 20:35:20.617446 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.617752 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:35:20.620579 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.620983 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.621007 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.621207 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.623498 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.623818 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.623838 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.623997 1118228 provision.go:143] copyHostCerts
	I0731 20:35:20.624037 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:35:20.624081 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:35:20.624105 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:35:20.624184 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:35:20.624303 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:35:20.624333 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:35:20.624341 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:35:20.624386 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:35:20.624450 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:35:20.624482 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:35:20.624501 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:35:20.624539 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:35:20.624610 1118228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887 san=[127.0.0.1 192.168.39.195 ha-430887 localhost minikube]
	I0731 20:35:20.936480 1118228 provision.go:177] copyRemoteCerts
	I0731 20:35:20.936550 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:35:20.936576 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.939130 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.939395 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.939421 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.939612 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.939835 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.940005 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.940186 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:35:21.021942 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:35:21.022028 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 20:35:21.044974 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:35:21.045045 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:35:21.067902 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:35:21.067975 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:35:21.090490 1118228 provision.go:87] duration metric: took 473.039314ms to configureAuth
	I0731 20:35:21.090520 1118228 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:35:21.090731 1118228 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:35:21.090805 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:21.093360 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:21.093727 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:21.093758 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:21.093909 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:21.094136 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:21.094308 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:21.094422 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:21.094579 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:21.094749 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:21.094762 1118228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:36:51.808776 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:36:51.808809 1118228 machine.go:97] duration metric: took 1m31.531172246s to provisionDockerMachine
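The ~91 s between issuing the /etc/sysconfig/crio.minikube command above and its completion is dominated by the trailing `sudo systemctl restart crio`. A minimal sketch, assuming shell access to the guest (the journalctl call is an assumption, not taken from this log), of how the drop-in and the restart could be checked by hand:
	cat /etc/sysconfig/crio.minikube         # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl status crio --no-pager    # confirm the restart finished cleanly
	sudo journalctl -u crio --since "10 minutes ago" | tail -n 20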
	I0731 20:36:51.808825 1118228 start.go:293] postStartSetup for "ha-430887" (driver="kvm2")
	I0731 20:36:51.808837 1118228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:36:51.808862 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:51.809229 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:36:51.809259 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:51.812520 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.813018 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:51.813054 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.813224 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:51.813416 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:51.813584 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:51.813703 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:51.894393 1118228 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:36:51.898662 1118228 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:36:51.898691 1118228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:36:51.898761 1118228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:36:51.898849 1118228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:36:51.898865 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:36:51.898959 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:36:51.908067 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:36:51.933264 1118228 start.go:296] duration metric: took 124.426167ms for postStartSetup
	I0731 20:36:51.933311 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:51.933628 1118228 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 20:36:51.933657 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:51.936398 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.936743 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:51.936768 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.936987 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:51.937194 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:51.937360 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:51.937500 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	W0731 20:36:52.017337 1118228 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 20:36:52.017368 1118228 fix.go:56] duration metric: took 1m31.761980229s for fixHost
	I0731 20:36:52.017396 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.020253 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.020633 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.020662 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.020834 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.021024 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.021175 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.021298 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.021452 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:36:52.021627 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:36:52.021637 1118228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:36:52.124235 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722458212.080094625
	
	I0731 20:36:52.124261 1118228 fix.go:216] guest clock: 1722458212.080094625
	I0731 20:36:52.124271 1118228 fix.go:229] Guest: 2024-07-31 20:36:52.080094625 +0000 UTC Remote: 2024-07-31 20:36:52.017377706 +0000 UTC m=+91.893847600 (delta=62.716919ms)
	I0731 20:36:52.124300 1118228 fix.go:200] guest clock delta is within tolerance: 62.716919ms
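The delta above is fix.go comparing the guest's `date +%s.%N` output against the host clock at the moment the command returned. A rough sketch, reusing the SSH key path and user shown elsewhere in this log, of reproducing the same measurement by hand:
	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa \
	    docker@192.168.39.195 'date +%s.%N')
	echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) s"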
	I0731 20:36:52.124308 1118228 start.go:83] releasing machines lock for "ha-430887", held for 1m31.868937112s
	I0731 20:36:52.124334 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.124618 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:36:52.127021 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.127368 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.127389 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.127640 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128194 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128370 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128441 1118228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:36:52.128482 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.128578 1118228 ssh_runner.go:195] Run: cat /version.json
	I0731 20:36:52.128600 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.131010 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131144 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131390 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.131414 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131512 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.131648 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.131683 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131715 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.131806 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.131894 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.131972 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.132034 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:52.132132 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.132256 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:52.212329 1118228 ssh_runner.go:195] Run: systemctl --version
	I0731 20:36:52.236778 1118228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:36:52.390764 1118228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:36:52.402229 1118228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:36:52.402296 1118228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:36:52.411228 1118228 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 20:36:52.411249 1118228 start.go:495] detecting cgroup driver to use...
	I0731 20:36:52.411309 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:36:52.427792 1118228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:36:52.441147 1118228 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:36:52.441194 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:36:52.453976 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:36:52.466822 1118228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:36:52.630407 1118228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:36:52.771840 1118228 docker.go:233] disabling docker service ...
	I0731 20:36:52.771919 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:36:52.787172 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:36:52.799429 1118228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:36:52.939477 1118228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:36:53.078755 1118228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:36:53.091991 1118228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:36:53.108885 1118228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:36:53.108952 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.118192 1118228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:36:53.118249 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.127620 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.136815 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.145845 1118228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:36:53.154961 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.163914 1118228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.173710 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
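Taken together, the sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf. A hedged reconstruction of the keys they leave behind (values inferred from the commands, not read back from the guest):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]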
	I0731 20:36:53.182916 1118228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:36:53.190958 1118228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:36:53.199237 1118228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:36:53.340424 1118228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:36:56.749061 1118228 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.408586374s)
	I0731 20:36:56.749099 1118228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:36:56.749169 1118228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:36:56.754456 1118228 start.go:563] Will wait 60s for crictl version
	I0731 20:36:56.754519 1118228 ssh_runner.go:195] Run: which crictl
	I0731 20:36:56.757927 1118228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:36:56.794666 1118228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:36:56.794755 1118228 ssh_runner.go:195] Run: crio --version
	I0731 20:36:56.820027 1118228 ssh_runner.go:195] Run: crio --version
	I0731 20:36:56.847412 1118228 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:36:56.848833 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:36:56.851389 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:56.851745 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:56.851773 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:56.851967 1118228 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:36:56.856192 1118228 kubeadm.go:883] updating cluster {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:36:56.856377 1118228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:36:56.856438 1118228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:36:56.894628 1118228 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:36:56.894651 1118228 crio.go:433] Images already preloaded, skipping extraction
	I0731 20:36:56.894706 1118228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:36:56.925007 1118228 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:36:56.925032 1118228 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:36:56.925045 1118228 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0731 20:36:56.925158 1118228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
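The empty `ExecStart=` line in the unit fragment above is the standard systemd drop-in idiom: it clears the base unit's command so the following `ExecStart=` fully replaces it. A small sketch (the drop-in file name matches the 10-kubeadm.conf scp later in this log) of confirming the override on the guest:
	systemctl cat kubelet | grep -n 'ExecStart='    # both the empty line and the overriding command should appear
	systemctl show kubelet -p ExecStart --no-pager  # the effective command, including --node-ip=192.168.39.195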
	I0731 20:36:56.925236 1118228 ssh_runner.go:195] Run: crio config
	I0731 20:36:56.967715 1118228 cni.go:84] Creating CNI manager for ""
	I0731 20:36:56.967741 1118228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 20:36:56.967750 1118228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:36:56.967782 1118228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430887 NodeName:ha-430887 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:36:56.967917 1118228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430887"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
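This rendered kubeadm config is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp further down). A hedged sketch, not something minikube runs in this restart path, of exercising the file against the matching kubeadm binary without touching the cluster:
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run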
	
	I0731 20:36:56.967937 1118228 kube-vip.go:115] generating kube-vip config ...
	I0731 20:36:56.967979 1118228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:36:56.978318 1118228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 20:36:56.978428 1118228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
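The manifest above runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 behind a leader-election lease (plndr-cp-lock) and load-balances port 8443 across the control-plane nodes. A hedged sketch of checking the VIP from any host that can reach the cluster network (-k skips CA verification):
	curl -sk https://192.168.39.254:8443/version   # the API server should answer through the VIP
	ip addr show eth0 | grep 192.168.39.254        # on the node currently holding the lease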
	I0731 20:36:56.978505 1118228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:36:56.986784 1118228 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:36:56.986847 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 20:36:56.994891 1118228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 20:36:57.009059 1118228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:36:57.023615 1118228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 20:36:57.037897 1118228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:36:57.054020 1118228 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:36:57.057305 1118228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:36:57.194514 1118228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:36:57.208407 1118228 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.195
	I0731 20:36:57.208442 1118228 certs.go:194] generating shared ca certs ...
	I0731 20:36:57.208462 1118228 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.208669 1118228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:36:57.208736 1118228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:36:57.208749 1118228 certs.go:256] generating profile certs ...
	I0731 20:36:57.208854 1118228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:36:57.208888 1118228 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d
	I0731 20:36:57.208908 1118228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.44 192.168.39.254]
	I0731 20:36:57.438216 1118228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d ...
	I0731 20:36:57.438251 1118228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d: {Name:mkd60e10541584eec4c9989b951286c51783db93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.438427 1118228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d ...
	I0731 20:36:57.438439 1118228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d: {Name:mk0a8b3d414b20a472b716a1362fed3b3a750ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.438513 1118228 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:36:57.438651 1118228 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:36:57.438779 1118228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:36:57.438795 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:36:57.438808 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:36:57.438821 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:36:57.438834 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:36:57.438847 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:36:57.438860 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:36:57.438871 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:36:57.438881 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:36:57.438927 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:36:57.438958 1118228 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:36:57.438965 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:36:57.438984 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:36:57.439003 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:36:57.439025 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:36:57.439061 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:36:57.439087 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.439098 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.439107 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.439721 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:36:57.463409 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:36:57.484436 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:36:57.505595 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:36:57.527046 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:36:57.547643 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:36:57.568233 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:36:57.589102 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:36:57.610725 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:36:57.631375 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:36:57.651991 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:36:57.678585 1118228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:36:57.705429 1118228 ssh_runner.go:195] Run: openssl version
	I0731 20:36:57.710597 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:36:57.719787 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.723659 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.723695 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.729235 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:36:57.737308 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:36:57.746541 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.750495 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.750535 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.755540 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 20:36:57.764357 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:36:57.773528 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.777434 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.777466 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.782341 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
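Each test/ln pair above installs a CA under the OpenSSL hashed-directory convention: the link name is the certificate's subject hash plus `.0`, which is how `-CApath` lookups find it. A minimal sketch of the same pattern for the minikubeCA certificate:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 per the log above
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem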
	I0731 20:36:57.790273 1118228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:36:57.794217 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:36:57.799129 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:36:57.803905 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:36:57.808787 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:36:57.813792 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:36:57.818602 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
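The `-checkend 86400` probes ask whether each certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero means it would expire and would need regeneration. A minimal sketch of the same check against one of the files above:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "still valid for at least 24h"
	else
	    echo "expires within 24h"
	fi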
	I0731 20:36:57.823533 1118228 kubeadm.go:392] StartCluster: {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:36:57.823650 1118228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:36:57.823697 1118228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:36:57.856982 1118228 cri.go:89] found id: "db8045a86010e8896c91369775fb85c60c3de20e1c43761bf04f45756ef5189c"
	I0731 20:36:57.857007 1118228 cri.go:89] found id: "b29e43e77e100722452dd2891e40117ca378f479f8a698dea015f68732a14711"
	I0731 20:36:57.857011 1118228 cri.go:89] found id: "033f180dfe3e3003ef7c66c3813e060312602c0cbfe718203e9a3a9617c19a4f"
	I0731 20:36:57.857014 1118228 cri.go:89] found id: "6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02"
	I0731 20:36:57.857017 1118228 cri.go:89] found id: "431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9"
	I0731 20:36:57.857021 1118228 cri.go:89] found id: "a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a"
	I0731 20:36:57.857023 1118228 cri.go:89] found id: "63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d"
	I0731 20:36:57.857028 1118228 cri.go:89] found id: "2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115"
	I0731 20:36:57.857032 1118228 cri.go:89] found id: "87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94"
	I0731 20:36:57.857039 1118228 cri.go:89] found id: "03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a"
	I0731 20:36:57.857043 1118228 cri.go:89] found id: "019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756"
	I0731 20:36:57.857047 1118228 cri.go:89] found id: "5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338"
	I0731 20:36:57.857051 1118228 cri.go:89] found id: "31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0"
	I0731 20:36:57.857054 1118228 cri.go:89] found id: ""
	I0731 20:36:57.857107 1118228 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.653985985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458388653962828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb06acc8-7aa1-433f-97fd-bca466a10b75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.654600454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98b24bbf-f53e-40d2-b7b0-5021766be0f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.654670842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98b24bbf-f53e-40d2-b7b0-5021766be0f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.655120940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98b24bbf-f53e-40d2-b7b0-5021766be0f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.699835918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=469258a0-94ff-44c7-ada1-6e15a072e891 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.699924473Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=469258a0-94ff-44c7-ada1-6e15a072e891 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.707841913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1b9202d-988e-4294-b842-cf85b3339e25 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.708548670Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458388708519465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1b9202d-988e-4294-b842-cf85b3339e25 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.709422090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f29c2ced-02e1-40e4-9810-2cbfe0a83665 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.709519837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f29c2ced-02e1-40e4-9810-2cbfe0a83665 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.710168464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f29c2ced-02e1-40e4-9810-2cbfe0a83665 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.756046655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54bf11fc-dd9c-4bfe-95de-c0478c1e29f5 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.756126597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54bf11fc-dd9c-4bfe-95de-c0478c1e29f5 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.757169011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41ab3285-05cd-40d0-8b5b-fdf49e963eae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.757592233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458388757569152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41ab3285-05cd-40d0-8b5b-fdf49e963eae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.758011169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d702603b-c8af-4334-a05f-58b463c2a884 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.758122641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d702603b-c8af-4334-a05f-58b463c2a884 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.762374049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d702603b-c8af-4334-a05f-58b463c2a884 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.811985216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f3e5424-c1c7-439a-bab8-8d448ddfecdb name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.812101765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f3e5424-c1c7-439a-bab8-8d448ddfecdb name=/runtime.v1.RuntimeService/Version
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.815059860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=daed8f19-3e1b-444e-a9e4-611560539d9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.815566771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458388815540800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=daed8f19-3e1b-444e-a9e4-611560539d9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.816097610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d316e49a-2e88-41b4-9879-d76fed81301d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.816200024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d316e49a-2e88-41b4-9879-d76fed81301d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:39:48 ha-430887 crio[3774]: time="2024-07-31 20:39:48.816654525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d316e49a-2e88-41b4-9879-d76fed81301d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ed0f9d6d5314f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   434a21f7beec6       storage-provisioner
	faa9efba25e3e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   f8f7b843226da       kube-apiserver-ha-430887
	a0c6cc7ab3ded       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   79acd5a39095a       kube-controller-manager-ha-430887
	34f2b676b4617       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   434a21f7beec6       storage-provisioner
	c5720ff2aa5d0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   2860c6703133a       busybox-fc5497c4f-tkmzn
	ccd405e99c37c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   fa66d796b0c21       kube-vip-ha-430887
	8dd05ed18c213       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   5e6ab10f8cba8       coredns-7db6d8ff4d-tkm49
	76b2da629018b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   ff5c7461ce1e7       kube-proxy-m49fz
	748dac0b04e4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   da3887f33eff5       coredns-7db6d8ff4d-rhlnq
	3586b36e485e4       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   499062c60ea08       kindnet-xmjzn
	b254f1ebef43b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   9ad3244ebf70d       etcd-ha-430887
	104ea95fae730       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   62d4f1a140004       kube-scheduler-ha-430887
	a34eb23fafa7e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   79acd5a39095a       kube-controller-manager-ha-430887
	6e8f03aa65b75       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   f8f7b843226da       kube-apiserver-ha-430887
	b61252be77d59       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   94749dc3b8a05       busybox-fc5497c4f-tkmzn
	6804a88577bb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   364daaeb39b2a       coredns-7db6d8ff4d-tkm49
	a3a604ebae38f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   c5096ff8ccf93       coredns-7db6d8ff4d-rhlnq
	63366667a98d5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   75a5e3ddf89ae       kindnet-xmjzn
	2c3cfe9da185a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   45f974d9fa89f       kube-proxy-m49fz
	019dbd42b381f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   e2bba8d22a3ce       kube-scheduler-ha-430887
	5d05fc1d45725       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   9da4629d918d3       etcd-ha-430887
	
	
	==> coredns [6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02] <==
	[INFO] 10.244.1.2:51933 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175849s
	[INFO] 10.244.1.2:36619 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118307s
	[INFO] 10.244.2.2:51012 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102784s
	[INFO] 10.244.2.2:46299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151507s
	[INFO] 10.244.2.2:32857 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075858s
	[INFO] 10.244.0.4:40942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087643s
	[INFO] 10.244.0.4:34086 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741525s
	[INFO] 10.244.0.4:52613 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051957s
	[INFO] 10.244.0.4:48069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001210819s
	[INFO] 10.244.1.2:57723 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084885s
	[INFO] 10.244.1.2:43800 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099387s
	[INFO] 10.244.2.2:48837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134956s
	[INFO] 10.244.2.2:46133 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008076s
	[INFO] 10.244.1.2:52179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123976s
	[INFO] 10.244.1.2:38064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121703s
	[INFO] 10.244.2.2:38356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183387s
	[INFO] 10.244.2.2:45481 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194275s
	[INFO] 10.244.2.2:42027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138509s
	[INFO] 10.244.2.2:47364 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140763s
	[INFO] 10.244.0.4:57224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075497s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[909917760]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:37:13.228) (total time: 10001ms):
	Trace[909917760]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:37:23.229)
	Trace[909917760]: [10.00156638s] [10.00156638s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46132->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce] <==
	Trace[1004538133]: [10.241951709s] [10.241951709s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39990->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[13707683]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:37:16.248) (total time: 10019ms):
	Trace[13707683]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer 10019ms (20:37:26.267)
	Trace[13707683]: [10.019150778s] [10.019150778s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a] <==
	[INFO] 10.244.0.4:35814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077793s
	[INFO] 10.244.0.4:57174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050499s
	[INFO] 10.244.1.2:35721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152974s
	[INFO] 10.244.1.2:52365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099511s
	[INFO] 10.244.2.2:56276 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095649s
	[INFO] 10.244.2.2:33350 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089031s
	[INFO] 10.244.0.4:39526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089609s
	[INFO] 10.244.0.4:32892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000036988s
	[INFO] 10.244.0.4:54821 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028078s
	[INFO] 10.244.0.4:40693 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000023261s
	[INFO] 10.244.1.2:56760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130165s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109643s
	[INFO] 10.244.0.4:55943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117823s
	[INFO] 10.244.0.4:40806 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010301s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076201s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1911&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2113701684]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:35:08.117) (total time: 11133ms):
	Trace[2113701684]: ---"Objects listed" error:Unauthorized 11133ms (20:35:19.250)
	Trace[2113701684]: [11.133957975s] [11.133957975s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-430887
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:25:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:39:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-430887
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d983ecff48054665b7d9523d0704c9fc
	  System UUID:                d983ecff-4805-4665-b7d9-523d0704c9fc
	  Boot ID:                    713545a1-3d19-4194-8d69-3cd83a4e4967
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tkmzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-rhlnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-tkm49             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-430887                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-xmjzn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-430887             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-430887    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-m49fz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-430887             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-430887                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-430887 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Warning  ContainerGCFailed        2m53s (x2 over 3m53s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           115s                   node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           106s                   node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	
	
	Name:               ha-430887-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-430887-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec9db720f1af4a7b8ddebc5f57826488
	  System UUID:                ec9db720-f1af-4a7b-8dde-bc5f57826488
	  Boot ID:                    c0cff76a-37e5-4ff7-a710-fedd91287908
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hhwcx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-430887-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-49h86                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-430887-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-430887-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hsd92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-430887-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-430887-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 84s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                    node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-430887-m02 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-430887-m02 status is now: NodeNotReady
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           115s                   node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           106s                   node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	
	
	Name:               ha-430887-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_28_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:39:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:39:24 +0000   Wed, 31 Jul 2024 20:38:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:39:24 +0000   Wed, 31 Jul 2024 20:38:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:39:24 +0000   Wed, 31 Jul 2024 20:38:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:39:24 +0000   Wed, 31 Jul 2024 20:38:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-430887-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d94d6e3c9c5248219d2ba3137d0cbf54
	  System UUID:                d94d6e3c-9c52-4821-9d2b-a3137d0cbf54
	  Boot ID:                    ae67bb79-fb4c-4379-b119-6dbd8b1b7d55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lt5n8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-430887-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fbt5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-430887-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-430887-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4mft2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-430887-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-430887-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-430887-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal   RegisteredNode           115s               node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	  Normal   NodeNotReady             75s                node-controller  Node ha-430887-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 56s                kubelet          Node ha-430887-m03 has been rebooted, boot id: ae67bb79-fb4c-4379-b119-6dbd8b1b7d55
	  Normal   NodeReady                56s                kubelet          Node ha-430887-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  55s (x2 over 56s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 56s)  kubelet          Node ha-430887-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 56s)  kubelet          Node ha-430887-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node ha-430887-m03 event: Registered Node ha-430887-m03 in Controller
	
	
	Name:               ha-430887-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_29_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:29:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:39:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:39:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:39:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:39:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-430887-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e62b3ad5cf6244ff98aa273667a5b995
	  System UUID:                e62b3ad5-cf62-44ff-98aa-273667a5b995
	  Boot ID:                    cd5876c1-2dfd-4349-a3a2-5e689fc17a20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gg2tl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8cqlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-430887-m04 status is now: NodeReady
	  Normal   RegisteredNode           115s               node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeNotReady             75s                node-controller  Node ha-430887-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-430887-m04 has been rebooted, boot id: cd5876c1-2dfd-4349-a3a2-5e689fc17a20
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-430887-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-430887-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.396030] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053894] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164850] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.142838] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.248524] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.814747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.436744] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.058175] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.102873] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.077595] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 20:26] kauditd_printk_skb: 18 callbacks suppressed
	[ +24.630735] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 20:27] kauditd_printk_skb: 28 callbacks suppressed
	[Jul31 20:36] systemd-fstab-generator[3687]: Ignoring "noauto" option for root device
	[  +0.147345] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.161550] systemd-fstab-generator[3713]: Ignoring "noauto" option for root device
	[  +0.146078] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.256781] systemd-fstab-generator[3753]: Ignoring "noauto" option for root device
	[  +3.854286] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[Jul31 20:37] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.146598] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.051910] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.845025] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.600230] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338] <==
	{"level":"info","ts":"2024-07-31T20:35:21.200919Z","caller":"traceutil/trace.go:171","msg":"trace[1612728749] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"618.987905ms","start":"2024-07-31T20:35:20.581928Z","end":"2024-07-31T20:35:21.200916Z","steps":["trace[1612728749] 'agreement among raft nodes before linearized reading'  (duration: 618.975056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:35:21.200932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:35:20.581913Z","time spent":"619.014065ms","remote":"127.0.0.1:40510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/07/31 20:35:21 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 20:35:21 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T20:35:21.206296Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6657042673363248962,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-31T20:35:21.266455Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:35:21.266508Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:35:21.266571Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T20:35:21.266689Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266726Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266746Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266773Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266827Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266875Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266887Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266892Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.2669Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.266927Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.267038Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.26712Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.26728Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.267315Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.269818Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-31T20:35:21.269943Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-31T20:35:21.269976Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-430887","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> etcd [b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6] <==
	{"level":"warn","ts":"2024-07-31T20:38:47.899841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:38:47.966362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:38:48.002759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T20:38:48.097361Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.44:2380/version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:48.098234Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:49.849858Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:49.850057Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:52.100764Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.44:2380/version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:52.10099Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:54.850266Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:54.850487Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:56.103285Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.44:2380/version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:56.103343Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:59.850387Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:38:59.850589Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"efe64029709f6fc1","rtt":"0s","error":"dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:39:00.105842Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.44:2380/version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:39:00.105905Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"efe64029709f6fc1","error":"Get \"https://192.168.39.44:2380/version\": dial tcp 192.168.39.44:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T20:39:03.148726Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.173826Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"efe64029709f6fc1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T20:39:03.174018Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.174333Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"efe64029709f6fc1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T20:39:03.174407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.18162Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.185655Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:03.192815Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.44:33348","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:39:49 up 14 min,  0 users,  load average: 0.08, 0.25, 0.20
	Linux ha-430887 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d] <==
	I0731 20:39:15.213522       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:39:25.204462       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:39:25.204568       1 main.go:299] handling current node
	I0731 20:39:25.204595       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:39:25.204613       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:39:25.204949       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:39:25.205019       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:39:25.205119       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:39:25.205200       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:39:35.212420       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:39:35.212463       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:39:35.212613       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:39:35.212639       1 main.go:299] handling current node
	I0731 20:39:35.212651       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:39:35.212656       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:39:35.212760       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:39:35.212779       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:39:45.204789       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:39:45.204882       1 main.go:299] handling current node
	I0731 20:39:45.204909       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:39:45.204927       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:39:45.205071       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:39:45.205102       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:39:45.205236       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:39:45.205273       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d] <==
	I0731 20:34:56.552873       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:34:56.552936       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:34:56.553058       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:34:56.553081       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:34:56.553161       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:34:56.553178       1 main.go:299] handling current node
	I0731 20:34:56.553190       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:34:56.553204       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:06.553256       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:35:06.553316       1 main.go:299] handling current node
	I0731 20:35:06.553329       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:35:06.553335       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:06.553463       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:35:06.553481       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:35:06.553531       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:35:06.553547       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:35:16.552777       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:35:16.552820       1 main.go:299] handling current node
	I0731 20:35:16.552838       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:35:16.552846       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:16.552985       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:35:16.553015       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:35:16.553108       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:35:16.553185       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	E0731 20:35:19.248469       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381] <==
	I0731 20:37:04.550318       1 options.go:221] external host was not specified, using 192.168.39.195
	I0731 20:37:04.553114       1 server.go:148] Version: v1.30.3
	I0731 20:37:04.553310       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:05.226461       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 20:37:05.250790       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:37:05.256105       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 20:37:05.258158       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 20:37:05.258329       1 instance.go:299] Using reconciler: lease
	W0731 20:37:25.226315       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 20:37:25.226519       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0731 20:37:25.259859       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9] <==
	I0731 20:37:50.095684       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 20:37:50.095880       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 20:37:50.193357       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:37:50.193764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:37:50.198620       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 20:37:50.198692       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 20:37:50.198756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 20:37:50.199193       1 aggregator.go:165] initial CRD sync complete...
	I0731 20:37:50.199203       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 20:37:50.199209       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 20:37:50.199216       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:37:50.198764       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 20:37:50.198773       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 20:37:50.199839       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0731 20:37:50.206998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.44]
	I0731 20:37:50.244394       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 20:37:50.248640       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:37:50.248668       1 policy_source.go:224] refreshing policies
	I0731 20:37:50.275807       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:37:50.308650       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:37:50.315836       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 20:37:50.320898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 20:37:51.103870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 20:37:51.435857       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.195 192.168.39.44]
	W0731 20:38:01.435926       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.195]
	
	
	==> kube-controller-manager [a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7] <==
	I0731 20:38:03.122710       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-430887-m02"
	I0731 20:38:03.122787       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-430887-m03"
	I0731 20:38:03.122839       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-430887-m04"
	I0731 20:38:03.122967       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 20:38:03.123075       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 20:38:03.173179       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 20:38:03.173303       1 shared_informer.go:320] Caches are synced for TTL
	I0731 20:38:03.564322       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:38:03.624800       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:38:03.624835       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 20:38:06.483588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.829µs"
	I0731 20:38:07.696916       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-w6qzr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-w6qzr\": the object has been modified; please apply your changes to the latest version and try again"
	I0731 20:38:07.697295       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7657b8ea-8862-4a57-bb4b-205b2b38b301", APIVersion:"v1", ResourceVersion:"284", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-w6qzr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-w6qzr": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:38:07.717591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.652673ms"
	I0731 20:38:07.718343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="130.998µs"
	I0731 20:38:10.774475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.734229ms"
	I0731 20:38:10.774656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.314µs"
	I0731 20:38:17.380072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.328386ms"
	I0731 20:38:17.380252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.595µs"
	I0731 20:38:34.644391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.496269ms"
	I0731 20:38:34.644992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.234µs"
	I0731 20:38:55.065030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.201µs"
	I0731 20:39:15.314105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.360225ms"
	I0731 20:39:15.314294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.86µs"
	I0731 20:39:40.998907       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-430887-m04"
	
	
	==> kube-controller-manager [a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3] <==
	I0731 20:37:05.588803       1 serving.go:380] Generated self-signed cert in-memory
	I0731 20:37:06.080670       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 20:37:06.080701       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:06.082020       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 20:37:06.082214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 20:37:06.082268       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 20:37:06.082451       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0731 20:37:26.265264       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.195:8443/healthz\": dial tcp 192.168.39.195:8443: connect: connection refused"
	
	
	==> kube-proxy [2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115] <==
	E0731 20:34:04.448625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:04.448688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:04.448819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:20.578176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:20.578668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:20.578439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:20.578742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:23.649406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:23.649482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:35.937622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:35.937759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:42.081089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:42.081416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:45.153408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:45.153461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:35:03.585743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:35:03.585847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:35:18.945205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:35:18.945263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520] <==
	I0731 20:37:05.525227       1 server_linux.go:69] "Using iptables proxy"
	E0731 20:37:06.465069       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:09.536632       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:12.608905       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:18.752563       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:31.040612       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 20:37:49.024806       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	I0731 20:37:49.131206       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:37:49.134061       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:37:49.134236       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:37:49.169290       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:37:49.169496       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:37:49.169535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:49.172072       1 config.go:192] "Starting service config controller"
	I0731 20:37:49.172109       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:37:49.172261       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:37:49.172284       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:37:49.172812       1 config.go:319] "Starting node config controller"
	I0731 20:37:49.172844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:37:49.273569       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:37:49.273736       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:37:49.273768       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756] <==
	W0731 20:35:14.392120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:35:14.392212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:35:14.557344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:35:14.557421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:35:14.905006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:14.905088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:14.924984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:35:14.925053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 20:35:15.099406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:35:15.099496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:35:15.184070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:15.184168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:16.073162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:16.073306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:16.145853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:35:16.145959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:35:16.254745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:35:16.254900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:35:16.570986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:35:16.571086       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:35:16.620480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:35:16.620535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:35:20.622413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:20.622446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:21.194722       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b] <==
	W0731 20:37:43.082400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.082470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.569247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.195:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.569301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.195:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.823836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.823895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.924952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.195:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.925015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.195:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.949705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.949760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:44.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:44.781246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:44.970676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:44.970812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:45.677534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.195:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:45.677583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.195:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:46.371569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:46.371671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:46.728564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:46.728600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:47.627873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:47.627929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:48.143815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:48.143929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	I0731 20:38:02.074658       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:37:46 ha-430887 kubelet[1378]: E0731 20:37:46.400525    1378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-430887.17e7666e07dbb854  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-430887,UID:586dfd40543240aed00e0fd894b7ddbf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-430887,},FirstTimestamp:2024-07-31 20:33:25.25677986 +0000 UTC m=+448.923072448,LastTimestamp:2024-07-31 20:33:25.25677986 +0000 UTC m=+448.923072448,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-430887,}"
	Jul 31 20:37:46 ha-430887 kubelet[1378]: I0731 20:37:46.401262    1378 status_manager.go:853] "Failed to get status for pod" podUID="b668a1b0-4434-4037-a0a1-0461e748521d" pod="default/busybox-fc5497c4f-tkmzn" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/pods/busybox-fc5497c4f-tkmzn\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 20:37:47 ha-430887 kubelet[1378]: I0731 20:37:47.452443    1378 scope.go:117] "RemoveContainer" containerID="a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3"
	Jul 31 20:37:48 ha-430887 kubelet[1378]: I0731 20:37:48.454231    1378 scope.go:117] "RemoveContainer" containerID="6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381"
	Jul 31 20:37:56 ha-430887 kubelet[1378]: E0731 20:37:56.476746    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:37:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:37:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:37:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:37:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:37:56 ha-430887 kubelet[1378]: I0731 20:37:56.521192    1378 scope.go:117] "RemoveContainer" containerID="033f180dfe3e3003ef7c66c3813e060312602c0cbfe718203e9a3a9617c19a4f"
	Jul 31 20:37:59 ha-430887 kubelet[1378]: I0731 20:37:59.452654    1378 scope.go:117] "RemoveContainer" containerID="34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369"
	Jul 31 20:37:59 ha-430887 kubelet[1378]: E0731 20:37:59.453383    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1eb16097-a994-4b42-b876-ebe7d6022be6)\"" pod="kube-system/storage-provisioner" podUID="1eb16097-a994-4b42-b876-ebe7d6022be6"
	Jul 31 20:38:02 ha-430887 kubelet[1378]: I0731 20:38:02.961763    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-tkmzn" podStartSLOduration=555.367332276 podStartE2EDuration="9m16.961746525s" podCreationTimestamp="2024-07-31 20:28:46 +0000 UTC" firstStartedPulling="2024-07-31 20:28:46.780594477 +0000 UTC m=+170.446887045" lastFinishedPulling="2024-07-31 20:28:48.375008713 +0000 UTC m=+172.041301294" observedRunningTime="2024-07-31 20:28:49.136198475 +0000 UTC m=+172.802491063" watchObservedRunningTime="2024-07-31 20:38:02.961746525 +0000 UTC m=+726.628039113"
	Jul 31 20:38:14 ha-430887 kubelet[1378]: I0731 20:38:14.467736    1378 scope.go:117] "RemoveContainer" containerID="34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369"
	Jul 31 20:38:14 ha-430887 kubelet[1378]: E0731 20:38:14.468219    1378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1eb16097-a994-4b42-b876-ebe7d6022be6)\"" pod="kube-system/storage-provisioner" podUID="1eb16097-a994-4b42-b876-ebe7d6022be6"
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.453229    1378 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-430887" podUID="516521a0-b217-407d-90ee-917c6cb6991a"
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.476183    1378 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-430887"
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.945795    1378 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-430887" podUID="516521a0-b217-407d-90ee-917c6cb6991a"
	Jul 31 20:38:28 ha-430887 kubelet[1378]: I0731 20:38:28.452966    1378 scope.go:117] "RemoveContainer" containerID="34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369"
	Jul 31 20:38:28 ha-430887 kubelet[1378]: I0731 20:38:28.969391    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-430887" podStartSLOduration=1.969368764 podStartE2EDuration="1.969368764s" podCreationTimestamp="2024-07-31 20:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 20:38:28.968501158 +0000 UTC m=+752.634793751" watchObservedRunningTime="2024-07-31 20:38:28.969368764 +0000 UTC m=+752.635661354"
	Jul 31 20:38:56 ha-430887 kubelet[1378]: E0731 20:38:56.467923    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:38:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:39:48.390240 1119636 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19360-1093692/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430887 -n ha-430887
helpers_test.go:261: (dbg) Run:  kubectl --context ha-430887 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 stop -v=7 --alsologtostderr
E0731 20:42:00.018977 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 stop -v=7 --alsologtostderr: exit status 82 (2m0.465004241s)

                                                
                                                
-- stdout --
	* Stopping node "ha-430887-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:40:07.653444 1120043 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:40:07.653583 1120043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:40:07.653596 1120043 out.go:304] Setting ErrFile to fd 2...
	I0731 20:40:07.653602 1120043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:40:07.654018 1120043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:40:07.654407 1120043 out.go:298] Setting JSON to false
	I0731 20:40:07.654528 1120043 mustload.go:65] Loading cluster: ha-430887
	I0731 20:40:07.655361 1120043 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:40:07.655488 1120043 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:40:07.655758 1120043 mustload.go:65] Loading cluster: ha-430887
	I0731 20:40:07.655959 1120043 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:40:07.656007 1120043 stop.go:39] StopHost: ha-430887-m04
	I0731 20:40:07.656538 1120043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:40:07.656596 1120043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:40:07.671747 1120043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0731 20:40:07.672231 1120043 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:40:07.672735 1120043 main.go:141] libmachine: Using API Version  1
	I0731 20:40:07.672768 1120043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:40:07.673168 1120043 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:40:07.675690 1120043 out.go:177] * Stopping node "ha-430887-m04"  ...
	I0731 20:40:07.677057 1120043 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:40:07.677092 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:40:07.677322 1120043 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:40:07.677354 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:40:07.680504 1120043 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:40:07.681037 1120043 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:39:35 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:40:07.681065 1120043 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:40:07.681227 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:40:07.681415 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:40:07.681583 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:40:07.681710 1120043 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	I0731 20:40:07.765807 1120043 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:40:07.817637 1120043 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:40:07.869663 1120043 main.go:141] libmachine: Stopping "ha-430887-m04"...
	I0731 20:40:07.869696 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:40:07.871167 1120043 main.go:141] libmachine: (ha-430887-m04) Calling .Stop
	I0731 20:40:07.874767 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 0/120
	I0731 20:40:08.876085 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 1/120
	I0731 20:40:09.877499 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 2/120
	I0731 20:40:10.878807 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 3/120
	I0731 20:40:11.880426 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 4/120
	I0731 20:40:12.882014 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 5/120
	I0731 20:40:13.883287 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 6/120
	I0731 20:40:14.884657 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 7/120
	I0731 20:40:15.886095 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 8/120
	I0731 20:40:16.887716 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 9/120
	I0731 20:40:17.889016 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 10/120
	I0731 20:40:18.890589 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 11/120
	I0731 20:40:19.891913 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 12/120
	I0731 20:40:20.893294 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 13/120
	I0731 20:40:21.894801 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 14/120
	I0731 20:40:22.897008 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 15/120
	I0731 20:40:23.898452 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 16/120
	I0731 20:40:24.899913 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 17/120
	I0731 20:40:25.901492 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 18/120
	I0731 20:40:26.902930 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 19/120
	I0731 20:40:27.905225 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 20/120
	I0731 20:40:28.906500 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 21/120
	I0731 20:40:29.907895 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 22/120
	I0731 20:40:30.909371 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 23/120
	I0731 20:40:31.910978 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 24/120
	I0731 20:40:32.912379 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 25/120
	I0731 20:40:33.914468 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 26/120
	I0731 20:40:34.916455 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 27/120
	I0731 20:40:35.918513 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 28/120
	I0731 20:40:36.919736 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 29/120
	I0731 20:40:37.921911 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 30/120
	I0731 20:40:38.924056 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 31/120
	I0731 20:40:39.925697 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 32/120
	I0731 20:40:40.926951 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 33/120
	I0731 20:40:41.928311 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 34/120
	I0731 20:40:42.930242 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 35/120
	I0731 20:40:43.931549 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 36/120
	I0731 20:40:44.933108 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 37/120
	I0731 20:40:45.934387 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 38/120
	I0731 20:40:46.935798 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 39/120
	I0731 20:40:47.937878 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 40/120
	I0731 20:40:48.939505 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 41/120
	I0731 20:40:49.940997 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 42/120
	I0731 20:40:50.942815 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 43/120
	I0731 20:40:51.944400 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 44/120
	I0731 20:40:52.946204 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 45/120
	I0731 20:40:53.947526 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 46/120
	I0731 20:40:54.948821 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 47/120
	I0731 20:40:55.950191 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 48/120
	I0731 20:40:56.951442 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 49/120
	I0731 20:40:57.953515 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 50/120
	I0731 20:40:58.954804 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 51/120
	I0731 20:40:59.956043 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 52/120
	I0731 20:41:00.957366 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 53/120
	I0731 20:41:01.958795 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 54/120
	I0731 20:41:02.961248 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 55/120
	I0731 20:41:03.963152 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 56/120
	I0731 20:41:04.964783 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 57/120
	I0731 20:41:05.966616 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 58/120
	I0731 20:41:06.968046 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 59/120
	I0731 20:41:07.969776 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 60/120
	I0731 20:41:08.971027 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 61/120
	I0731 20:41:09.972360 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 62/120
	I0731 20:41:10.974548 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 63/120
	I0731 20:41:11.975899 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 64/120
	I0731 20:41:12.977802 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 65/120
	I0731 20:41:13.979257 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 66/120
	I0731 20:41:14.980584 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 67/120
	I0731 20:41:15.982021 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 68/120
	I0731 20:41:16.983255 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 69/120
	I0731 20:41:17.985272 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 70/120
	I0731 20:41:18.986563 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 71/120
	I0731 20:41:19.988568 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 72/120
	I0731 20:41:20.990416 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 73/120
	I0731 20:41:21.991687 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 74/120
	I0731 20:41:22.993701 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 75/120
	I0731 20:41:23.994953 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 76/120
	I0731 20:41:24.997258 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 77/120
	I0731 20:41:25.998650 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 78/120
	I0731 20:41:27.000724 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 79/120
	I0731 20:41:28.002177 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 80/120
	I0731 20:41:29.004362 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 81/120
	I0731 20:41:30.005757 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 82/120
	I0731 20:41:31.007319 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 83/120
	I0731 20:41:32.008652 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 84/120
	I0731 20:41:33.010070 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 85/120
	I0731 20:41:34.011783 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 86/120
	I0731 20:41:35.013253 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 87/120
	I0731 20:41:36.014680 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 88/120
	I0731 20:41:37.016029 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 89/120
	I0731 20:41:38.018208 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 90/120
	I0731 20:41:39.019420 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 91/120
	I0731 20:41:40.021278 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 92/120
	I0731 20:41:41.022631 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 93/120
	I0731 20:41:42.023891 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 94/120
	I0731 20:41:43.025733 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 95/120
	I0731 20:41:44.027037 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 96/120
	I0731 20:41:45.028208 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 97/120
	I0731 20:41:46.029501 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 98/120
	I0731 20:41:47.031459 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 99/120
	I0731 20:41:48.033541 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 100/120
	I0731 20:41:49.034698 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 101/120
	I0731 20:41:50.036747 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 102/120
	I0731 20:41:51.038147 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 103/120
	I0731 20:41:52.039640 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 104/120
	I0731 20:41:53.042082 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 105/120
	I0731 20:41:54.043773 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 106/120
	I0731 20:41:55.045152 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 107/120
	I0731 20:41:56.046999 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 108/120
	I0731 20:41:57.048232 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 109/120
	I0731 20:41:58.050178 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 110/120
	I0731 20:41:59.051652 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 111/120
	I0731 20:42:00.053382 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 112/120
	I0731 20:42:01.055006 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 113/120
	I0731 20:42:02.056321 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 114/120
	I0731 20:42:03.058119 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 115/120
	I0731 20:42:04.059431 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 116/120
	I0731 20:42:05.060821 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 117/120
	I0731 20:42:06.062053 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 118/120
	I0731 20:42:07.063723 1120043 main.go:141] libmachine: (ha-430887-m04) Waiting for machine to stop 119/120
	I0731 20:42:08.064641 1120043 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:42:08.064705 1120043 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:42:08.066510 1120043 out.go:177] 
	W0731 20:42:08.067683 1120043 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:42:08.067697 1120043 out.go:239] * 
	* 
	W0731 20:42:08.071600 1120043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:42:08.072961 1120043 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-430887 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr: exit status 3 (19.062521503s)

                                                
                                                
-- stdout --
	ha-430887
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-430887-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:42:08.123201 1120477 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:42:08.123482 1120477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:42:08.123492 1120477 out.go:304] Setting ErrFile to fd 2...
	I0731 20:42:08.123497 1120477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:42:08.123745 1120477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:42:08.124054 1120477 out.go:298] Setting JSON to false
	I0731 20:42:08.124085 1120477 mustload.go:65] Loading cluster: ha-430887
	I0731 20:42:08.124141 1120477 notify.go:220] Checking for updates...
	I0731 20:42:08.124547 1120477 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:42:08.124567 1120477 status.go:255] checking status of ha-430887 ...
	I0731 20:42:08.124985 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.125066 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.149087 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0731 20:42:08.149523 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.150145 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.150165 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.150470 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.150686 1120477 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:42:08.152348 1120477 status.go:330] ha-430887 host status = "Running" (err=<nil>)
	I0731 20:42:08.152375 1120477 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:42:08.152748 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.152795 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.168307 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0731 20:42:08.168721 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.169173 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.169195 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.169548 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.169730 1120477 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:42:08.172555 1120477 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:42:08.173032 1120477 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:42:08.173066 1120477 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:42:08.173245 1120477 host.go:66] Checking if "ha-430887" exists ...
	I0731 20:42:08.173593 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.173643 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.190086 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I0731 20:42:08.190461 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.191099 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.191120 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.191432 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.191635 1120477 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:42:08.191849 1120477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:42:08.191879 1120477 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:42:08.194994 1120477 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:42:08.195390 1120477 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:42:08.195418 1120477 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:42:08.195654 1120477 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:42:08.195872 1120477 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:42:08.196040 1120477 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:42:08.196207 1120477 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:42:08.274988 1120477 ssh_runner.go:195] Run: systemctl --version
	I0731 20:42:08.280321 1120477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:42:08.294020 1120477 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:42:08.294050 1120477 api_server.go:166] Checking apiserver status ...
	I0731 20:42:08.294089 1120477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:42:08.313470 1120477 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5071/cgroup
	W0731 20:42:08.322026 1120477 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5071/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:42:08.322077 1120477 ssh_runner.go:195] Run: ls
	I0731 20:42:08.326058 1120477 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:42:08.330313 1120477 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:42:08.330337 1120477 status.go:422] ha-430887 apiserver status = Running (err=<nil>)
	I0731 20:42:08.330347 1120477 status.go:257] ha-430887 status: &{Name:ha-430887 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:42:08.330362 1120477 status.go:255] checking status of ha-430887-m02 ...
	I0731 20:42:08.330647 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.330687 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.345860 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0731 20:42:08.346237 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.346716 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.346739 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.347051 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.347242 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetState
	I0731 20:42:08.348773 1120477 status.go:330] ha-430887-m02 host status = "Running" (err=<nil>)
	I0731 20:42:08.348790 1120477 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:42:08.349109 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.349136 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.363957 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0731 20:42:08.364378 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.364856 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.364882 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.365188 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.365336 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetIP
	I0731 20:42:08.368056 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:42:08.368533 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:37:08 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:42:08.368555 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:42:08.368736 1120477 host.go:66] Checking if "ha-430887-m02" exists ...
	I0731 20:42:08.369065 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.369092 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.383817 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0731 20:42:08.384194 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.384653 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.384681 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.384981 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.385147 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .DriverName
	I0731 20:42:08.385316 1120477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:42:08.385334 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHHostname
	I0731 20:42:08.388051 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:42:08.388521 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:33", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:37:08 +0000 UTC Type:0 Mac:52:54:00:4a:64:33 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-430887-m02 Clientid:01:52:54:00:4a:64:33}
	I0731 20:42:08.388556 1120477 main.go:141] libmachine: (ha-430887-m02) DBG | domain ha-430887-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:4a:64:33 in network mk-ha-430887
	I0731 20:42:08.388660 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHPort
	I0731 20:42:08.388840 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHKeyPath
	I0731 20:42:08.389025 1120477 main.go:141] libmachine: (ha-430887-m02) Calling .GetSSHUsername
	I0731 20:42:08.389189 1120477 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m02/id_rsa Username:docker}
	I0731 20:42:08.471102 1120477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:42:08.484517 1120477 kubeconfig.go:125] found "ha-430887" server: "https://192.168.39.254:8443"
	I0731 20:42:08.484550 1120477 api_server.go:166] Checking apiserver status ...
	I0731 20:42:08.484622 1120477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:42:08.501037 1120477 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0731 20:42:08.510933 1120477 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:42:08.511011 1120477 ssh_runner.go:195] Run: ls
	I0731 20:42:08.515615 1120477 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:42:08.519800 1120477 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:42:08.519826 1120477 status.go:422] ha-430887-m02 apiserver status = Running (err=<nil>)
	I0731 20:42:08.519838 1120477 status.go:257] ha-430887-m02 status: &{Name:ha-430887-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:42:08.519857 1120477 status.go:255] checking status of ha-430887-m04 ...
	I0731 20:42:08.520237 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.520262 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.535296 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0731 20:42:08.535777 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.536259 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.536278 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.536599 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.536859 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetState
	I0731 20:42:08.538526 1120477 status.go:330] ha-430887-m04 host status = "Running" (err=<nil>)
	I0731 20:42:08.538543 1120477 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:42:08.538845 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.538887 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.553352 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0731 20:42:08.553837 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.554400 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.554437 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.554751 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.554943 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetIP
	I0731 20:42:08.557825 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:42:08.558291 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:39:35 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:42:08.558318 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:42:08.558447 1120477 host.go:66] Checking if "ha-430887-m04" exists ...
	I0731 20:42:08.558735 1120477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:42:08.558773 1120477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:42:08.573175 1120477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0731 20:42:08.573643 1120477 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:42:08.574177 1120477 main.go:141] libmachine: Using API Version  1
	I0731 20:42:08.574204 1120477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:42:08.574504 1120477 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:42:08.574698 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .DriverName
	I0731 20:42:08.574881 1120477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:42:08.574902 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHHostname
	I0731 20:42:08.577180 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:42:08.577529 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:27:cd", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:39:35 +0000 UTC Type:0 Mac:52:54:00:05:27:cd Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-430887-m04 Clientid:01:52:54:00:05:27:cd}
	I0731 20:42:08.577567 1120477 main.go:141] libmachine: (ha-430887-m04) DBG | domain ha-430887-m04 has defined IP address 192.168.39.83 and MAC address 52:54:00:05:27:cd in network mk-ha-430887
	I0731 20:42:08.577696 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHPort
	I0731 20:42:08.577867 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHKeyPath
	I0731 20:42:08.578025 1120477 main.go:141] libmachine: (ha-430887-m04) Calling .GetSSHUsername
	I0731 20:42:08.578154 1120477 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887-m04/id_rsa Username:docker}
	W0731 20:42:27.136310 1120477 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.83:22: connect: no route to host
	W0731 20:42:27.136438 1120477 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.83:22: connect: no route to host
	E0731 20:42:27.136458 1120477 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.83:22: connect: no route to host
	I0731 20:42:27.136466 1120477 status.go:257] ha-430887-m04 status: &{Name:ha-430887-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:42:27.136490 1120477 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.83:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-430887 -n ha-430887
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-430887 logs -n 25: (1.51398455s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m04 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp testdata/cp-test.txt                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887:/home/docker/cp-test_ha-430887-m04_ha-430887.txt                       |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887 sudo cat                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887.txt                                 |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m02:/home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m02 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m03:/home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n                                                                 | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | ha-430887-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-430887 ssh -n ha-430887-m03 sudo cat                                          | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC | 31 Jul 24 20:29 UTC |
	|         | /home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-430887 node stop m02 -v=7                                                     | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-430887 node start m02 -v=7                                                    | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-430887 -v=7                                                           | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-430887 -v=7                                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-430887 --wait=true -v=7                                                    | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:35 UTC | 31 Jul 24 20:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-430887                                                                | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:39 UTC |                     |
	| node    | ha-430887 node delete m03 -v=7                                                   | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:39 UTC | 31 Jul 24 20:40 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-430887 stop -v=7                                                              | ha-430887 | jenkins | v1.33.1 | 31 Jul 24 20:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:35:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:35:20.162311 1118228 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:35:20.162575 1118228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:35:20.162583 1118228 out.go:304] Setting ErrFile to fd 2...
	I0731 20:35:20.162587 1118228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:35:20.162791 1118228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:35:20.163321 1118228 out.go:298] Setting JSON to false
	I0731 20:35:20.164449 1118228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15471,"bootTime":1722442649,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:35:20.164526 1118228 start.go:139] virtualization: kvm guest
	I0731 20:35:20.167014 1118228 out.go:177] * [ha-430887] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:35:20.168751 1118228 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:35:20.168771 1118228 notify.go:220] Checking for updates...
	I0731 20:35:20.171645 1118228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:35:20.172948 1118228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:35:20.174239 1118228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:35:20.175390 1118228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:35:20.176629 1118228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:35:20.178365 1118228 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:35:20.178471 1118228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:35:20.178857 1118228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:20.178935 1118228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:20.195271 1118228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 20:35:20.195788 1118228 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:20.196457 1118228 main.go:141] libmachine: Using API Version  1
	I0731 20:35:20.196506 1118228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:20.196928 1118228 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:20.197149 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.232614 1118228 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:35:20.233928 1118228 start.go:297] selected driver: kvm2
	I0731 20:35:20.233941 1118228 start.go:901] validating driver "kvm2" against &{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:35:20.234108 1118228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:35:20.234458 1118228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:35:20.234549 1118228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:35:20.250826 1118228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:35:20.251543 1118228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:35:20.251611 1118228 cni.go:84] Creating CNI manager for ""
	I0731 20:35:20.251623 1118228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 20:35:20.251689 1118228 start.go:340] cluster config:
	{Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:35:20.251828 1118228 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:35:20.253534 1118228 out.go:177] * Starting "ha-430887" primary control-plane node in "ha-430887" cluster
	I0731 20:35:20.254768 1118228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:35:20.254812 1118228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:35:20.254831 1118228 cache.go:56] Caching tarball of preloaded images
	I0731 20:35:20.254922 1118228 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:35:20.254934 1118228 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:35:20.255095 1118228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/config.json ...
	I0731 20:35:20.255304 1118228 start.go:360] acquireMachinesLock for ha-430887: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:35:20.255359 1118228 start.go:364] duration metric: took 33.478µs to acquireMachinesLock for "ha-430887"
	I0731 20:35:20.255379 1118228 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:35:20.255389 1118228 fix.go:54] fixHost starting: 
	I0731 20:35:20.255656 1118228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:20.255695 1118228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:20.270221 1118228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0731 20:35:20.270667 1118228 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:20.271163 1118228 main.go:141] libmachine: Using API Version  1
	I0731 20:35:20.271188 1118228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:20.271571 1118228 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:20.271742 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.271895 1118228 main.go:141] libmachine: (ha-430887) Calling .GetState
	I0731 20:35:20.273528 1118228 fix.go:112] recreateIfNeeded on ha-430887: state=Running err=<nil>
	W0731 20:35:20.273550 1118228 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:35:20.276236 1118228 out.go:177] * Updating the running kvm2 "ha-430887" VM ...
	I0731 20:35:20.277623 1118228 machine.go:94] provisionDockerMachine start ...
	I0731 20:35:20.277645 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:35:20.277879 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.280422 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.280856 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.280875 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.281030 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.281226 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.281368 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.281489 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.281650 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.281886 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.281898 1118228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:35:20.384209 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:35:20.384234 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.384498 1118228 buildroot.go:166] provisioning hostname "ha-430887"
	I0731 20:35:20.384528 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.384696 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.386915 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.387303 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.387332 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.387447 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.387650 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.387888 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.388064 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.388262 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.388435 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.388447 1118228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-430887 && echo "ha-430887" | sudo tee /etc/hostname
	I0731 20:35:20.508473 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-430887
	
	I0731 20:35:20.508514 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.511422 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.511787 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.511824 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.512032 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.512274 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.512450 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.512591 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.512778 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:20.513005 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:20.513029 1118228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-430887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-430887/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-430887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:35:20.617354 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:35:20.617388 1118228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:35:20.617425 1118228 buildroot.go:174] setting up certificates
	I0731 20:35:20.617435 1118228 provision.go:84] configureAuth start
	I0731 20:35:20.617446 1118228 main.go:141] libmachine: (ha-430887) Calling .GetMachineName
	I0731 20:35:20.617752 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:35:20.620579 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.620983 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.621007 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.621207 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.623498 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.623818 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.623838 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.623997 1118228 provision.go:143] copyHostCerts
	I0731 20:35:20.624037 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:35:20.624081 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:35:20.624105 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:35:20.624184 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:35:20.624303 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:35:20.624333 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:35:20.624341 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:35:20.624386 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:35:20.624450 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:35:20.624482 1118228 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:35:20.624501 1118228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:35:20.624539 1118228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:35:20.624610 1118228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.ha-430887 san=[127.0.0.1 192.168.39.195 ha-430887 localhost minikube]
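The server-cert step above is a standard CA-signed certificate issuance using the SAN list shown in the log line; a minimal openssl sketch of an equivalent signing (paths abbreviated, and not the exact command minikube runs) would be:

    # hedged sketch only; CA paths and SAN list taken from the log line above
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-430887" -out server.csr
    openssl x509 -req -in server.csr \
      -CA .minikube/certs/ca.pem -CAkey .minikube/certs/ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.195,DNS:ha-430887,DNS:localhost,DNS:minikube') \
      -days 365 -out server.pem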
	I0731 20:35:20.936480 1118228 provision.go:177] copyRemoteCerts
	I0731 20:35:20.936550 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:35:20.936576 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:20.939130 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.939395 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:20.939421 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:20.939612 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:20.939835 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:20.940005 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:20.940186 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:35:21.021942 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:35:21.022028 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 20:35:21.044974 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:35:21.045045 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:35:21.067902 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:35:21.067975 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:35:21.090490 1118228 provision.go:87] duration metric: took 473.039314ms to configureAuth
	I0731 20:35:21.090520 1118228 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:35:21.090731 1118228 config.go:182] Loaded profile config "ha-430887": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:35:21.090805 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:35:21.093360 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:21.093727 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:35:21.093758 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:35:21.093909 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:35:21.094136 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:21.094308 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:35:21.094422 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:35:21.094579 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:21.094749 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:35:21.094762 1118228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:36:51.808776 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:36:51.808809 1118228 machine.go:97] duration metric: took 1m31.531172246s to provisionDockerMachine
	I0731 20:36:51.808825 1118228 start.go:293] postStartSetup for "ha-430887" (driver="kvm2")
	I0731 20:36:51.808837 1118228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:36:51.808862 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:51.809229 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:36:51.809259 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:51.812520 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.813018 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:51.813054 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.813224 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:51.813416 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:51.813584 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:51.813703 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:51.894393 1118228 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:36:51.898662 1118228 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:36:51.898691 1118228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 20:36:51.898761 1118228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 20:36:51.898849 1118228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 20:36:51.898865 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 20:36:51.898959 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:36:51.908067 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:36:51.933264 1118228 start.go:296] duration metric: took 124.426167ms for postStartSetup
	I0731 20:36:51.933311 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:51.933628 1118228 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 20:36:51.933657 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:51.936398 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.936743 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:51.936768 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:51.936987 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:51.937194 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:51.937360 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:51.937500 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	W0731 20:36:52.017337 1118228 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 20:36:52.017368 1118228 fix.go:56] duration metric: took 1m31.761980229s for fixHost
	I0731 20:36:52.017396 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.020253 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.020633 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.020662 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.020834 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.021024 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.021175 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.021298 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.021452 1118228 main.go:141] libmachine: Using SSH client type: native
	I0731 20:36:52.021627 1118228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0731 20:36:52.021637 1118228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:36:52.124235 1118228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722458212.080094625
	
	I0731 20:36:52.124261 1118228 fix.go:216] guest clock: 1722458212.080094625
	I0731 20:36:52.124271 1118228 fix.go:229] Guest: 2024-07-31 20:36:52.080094625 +0000 UTC Remote: 2024-07-31 20:36:52.017377706 +0000 UTC m=+91.893847600 (delta=62.716919ms)
	I0731 20:36:52.124300 1118228 fix.go:200] guest clock delta is within tolerance: 62.716919ms
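The delta above is simply the guest's date +%s.%N compared against the host-side timestamp recorded around the SSH call; a rough shell sketch of the same comparison (addresses and key path taken from this log, with no claim about minikube's actual tolerance threshold):

    # hedged sketch: measure guest/host clock skew the same way as above
    GUEST=$(ssh -i .minikube/machines/ha-430887/id_rsa docker@192.168.39.195 'date +%s.%N')
    LOCAL=$(date +%s.%N)
    echo "clock delta: $(echo "$GUEST - $LOCAL" | bc) seconds"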
	I0731 20:36:52.124308 1118228 start.go:83] releasing machines lock for "ha-430887", held for 1m31.868937112s
	I0731 20:36:52.124334 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.124618 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:36:52.127021 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.127368 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.127389 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.127640 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128194 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128370 1118228 main.go:141] libmachine: (ha-430887) Calling .DriverName
	I0731 20:36:52.128441 1118228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:36:52.128482 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.128578 1118228 ssh_runner.go:195] Run: cat /version.json
	I0731 20:36:52.128600 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHHostname
	I0731 20:36:52.131010 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131144 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131390 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.131414 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131512 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.131648 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:52.131683 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:52.131715 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.131806 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHPort
	I0731 20:36:52.131894 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.131972 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHKeyPath
	I0731 20:36:52.132034 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:52.132132 1118228 main.go:141] libmachine: (ha-430887) Calling .GetSSHUsername
	I0731 20:36:52.132256 1118228 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/ha-430887/id_rsa Username:docker}
	I0731 20:36:52.212329 1118228 ssh_runner.go:195] Run: systemctl --version
	I0731 20:36:52.236778 1118228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:36:52.390764 1118228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:36:52.402229 1118228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:36:52.402296 1118228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:36:52.411228 1118228 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 20:36:52.411249 1118228 start.go:495] detecting cgroup driver to use...
	I0731 20:36:52.411309 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:36:52.427792 1118228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:36:52.441147 1118228 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:36:52.441194 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:36:52.453976 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:36:52.466822 1118228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:36:52.630407 1118228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:36:52.771840 1118228 docker.go:233] disabling docker service ...
	I0731 20:36:52.771919 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:36:52.787172 1118228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:36:52.799429 1118228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:36:52.939477 1118228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:36:53.078755 1118228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:36:53.091991 1118228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:36:53.108885 1118228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:36:53.108952 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.118192 1118228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:36:53.118249 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.127620 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.136815 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.145845 1118228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:36:53.154961 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.163914 1118228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:36:53.173710 1118228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
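The sed runs above pin the pause image, switch the cgroup manager, drop and re-add conmon_cgroup, and inject a default sysctl; a quick verification sketch of the intended end state of the drop-in (the resulting file itself is not captured in this log):

    # expected values shown as comments; not a verbatim capture
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    grep -A2 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]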
	I0731 20:36:53.182916 1118228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:36:53.190958 1118228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:36:53.199237 1118228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:36:53.340424 1118228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:36:56.749061 1118228 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.408586374s)
	I0731 20:36:56.749099 1118228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:36:56.749169 1118228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:36:56.754456 1118228 start.go:563] Will wait 60s for crictl version
	I0731 20:36:56.754519 1118228 ssh_runner.go:195] Run: which crictl
	I0731 20:36:56.757927 1118228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:36:56.794666 1118228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:36:56.794755 1118228 ssh_runner.go:195] Run: crio --version
	I0731 20:36:56.820027 1118228 ssh_runner.go:195] Run: crio --version
	I0731 20:36:56.847412 1118228 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:36:56.848833 1118228 main.go:141] libmachine: (ha-430887) Calling .GetIP
	I0731 20:36:56.851389 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:56.851745 1118228 main.go:141] libmachine: (ha-430887) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:dc:43", ip: ""} in network mk-ha-430887: {Iface:virbr1 ExpiryTime:2024-07-31 21:25:32 +0000 UTC Type:0 Mac:52:54:00:10:dc:43 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-430887 Clientid:01:52:54:00:10:dc:43}
	I0731 20:36:56.851773 1118228 main.go:141] libmachine: (ha-430887) DBG | domain ha-430887 has defined IP address 192.168.39.195 and MAC address 52:54:00:10:dc:43 in network mk-ha-430887
	I0731 20:36:56.851967 1118228 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:36:56.856192 1118228 kubeadm.go:883] updating cluster {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:36:56.856377 1118228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:36:56.856438 1118228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:36:56.894628 1118228 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:36:56.894651 1118228 crio.go:433] Images already preloaded, skipping extraction
	I0731 20:36:56.894706 1118228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:36:56.925007 1118228 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:36:56.925032 1118228 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:36:56.925045 1118228 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.30.3 crio true true} ...
	I0731 20:36:56.925158 1118228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-430887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:36:56.925236 1118228 ssh_runner.go:195] Run: crio config
	I0731 20:36:56.967715 1118228 cni.go:84] Creating CNI manager for ""
	I0731 20:36:56.967741 1118228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 20:36:56.967750 1118228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:36:56.967782 1118228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-430887 NodeName:ha-430887 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:36:56.967917 1118228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-430887"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:36:56.967937 1118228 kube-vip.go:115] generating kube-vip config ...
	I0731 20:36:56.967979 1118228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 20:36:56.978318 1118228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
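The modprobe above loads the IPVS and conntrack modules that kube-vip's control-plane load balancing relies on; whether they actually loaded can be confirmed with a quick check (a sketch, not part of the captured run):

    lsmod | grep -E '^(ip_vs|nf_conntrack)'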
	I0731 20:36:56.978428 1118228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
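Once this manifest lands in /etc/kubernetes/manifests, kubelet runs kube-vip as a static pod and the VIP should be bound on the interface named in the config; a minimal on-node check (interface and address taken from the config above, commands assumed rather than captured):

    ip -4 addr show dev eth0 | grep 192.168.39.254
    sudo crictl ps --name kube-vip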
	I0731 20:36:56.978505 1118228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:36:56.986784 1118228 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:36:56.986847 1118228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 20:36:56.994891 1118228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 20:36:57.009059 1118228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:36:57.023615 1118228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 20:36:57.037897 1118228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 20:36:57.054020 1118228 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 20:36:57.057305 1118228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:36:57.194514 1118228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:36:57.208407 1118228 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887 for IP: 192.168.39.195
	I0731 20:36:57.208442 1118228 certs.go:194] generating shared ca certs ...
	I0731 20:36:57.208462 1118228 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.208669 1118228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 20:36:57.208736 1118228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 20:36:57.208749 1118228 certs.go:256] generating profile certs ...
	I0731 20:36:57.208854 1118228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/client.key
	I0731 20:36:57.208888 1118228 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d
	I0731 20:36:57.208908 1118228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.149 192.168.39.44 192.168.39.254]
	I0731 20:36:57.438216 1118228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d ...
	I0731 20:36:57.438251 1118228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d: {Name:mkd60e10541584eec4c9989b951286c51783db93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.438427 1118228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d ...
	I0731 20:36:57.438439 1118228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d: {Name:mk0a8b3d414b20a472b716a1362fed3b3a750ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:36:57.438513 1118228 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt.221e426d -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt
	I0731 20:36:57.438651 1118228 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key.221e426d -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key
	I0731 20:36:57.438779 1118228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key
	I0731 20:36:57.438795 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:36:57.438808 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:36:57.438821 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:36:57.438834 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:36:57.438847 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:36:57.438860 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:36:57.438871 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:36:57.438881 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:36:57.438927 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 20:36:57.438958 1118228 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 20:36:57.438965 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 20:36:57.438984 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:36:57.439003 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:36:57.439025 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 20:36:57.439061 1118228 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 20:36:57.439087 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.439098 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.439107 1118228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.439721 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:36:57.463409 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:36:57.484436 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:36:57.505595 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 20:36:57.527046 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:36:57.547643 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:36:57.568233 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:36:57.589102 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/ha-430887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:36:57.610725 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:36:57.631375 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 20:36:57.651991 1118228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 20:36:57.678585 1118228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:36:57.705429 1118228 ssh_runner.go:195] Run: openssl version
	I0731 20:36:57.710597 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:36:57.719787 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.723659 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.723695 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:36:57.729235 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:36:57.737308 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 20:36:57.746541 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.750495 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.750535 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 20:36:57.755540 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 20:36:57.764357 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 20:36:57.773528 1118228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.777434 1118228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.777466 1118228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 20:36:57.782341 1118228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
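The test -L / ln -fs pairs above follow OpenSSL's hashed-directory lookup convention: each CA file gets a symlink named after its subject hash with a .0 suffix, which is exactly what the preceding openssl x509 -hash -noout runs compute. A generic sketch of the same step (CERT is a placeholder):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, matching the symlink above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"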
	I0731 20:36:57.790273 1118228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:36:57.794217 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:36:57.799129 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:36:57.803905 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:36:57.808787 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:36:57.813792 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:36:57.818602 1118228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
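Each -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how this restart path decides whether existing certs can be reused; for example:

    # exit status 0 means the cert stays valid for at least another day
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver cert valid for >= 24h" || echo "apiserver cert expires within 24h"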
	I0731 20:36:57.823533 1118228 kubeadm.go:392] StartCluster: {Name:ha-430887 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-430887 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.83 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:36:57.823650 1118228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:36:57.823697 1118228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:36:57.856982 1118228 cri.go:89] found id: "db8045a86010e8896c91369775fb85c60c3de20e1c43761bf04f45756ef5189c"
	I0731 20:36:57.857007 1118228 cri.go:89] found id: "b29e43e77e100722452dd2891e40117ca378f479f8a698dea015f68732a14711"
	I0731 20:36:57.857011 1118228 cri.go:89] found id: "033f180dfe3e3003ef7c66c3813e060312602c0cbfe718203e9a3a9617c19a4f"
	I0731 20:36:57.857014 1118228 cri.go:89] found id: "6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02"
	I0731 20:36:57.857017 1118228 cri.go:89] found id: "431be4d60e8829a9d862428d851f35a6f8b8c35f82db816a553c40efc5a761c9"
	I0731 20:36:57.857021 1118228 cri.go:89] found id: "a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a"
	I0731 20:36:57.857023 1118228 cri.go:89] found id: "63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d"
	I0731 20:36:57.857028 1118228 cri.go:89] found id: "2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115"
	I0731 20:36:57.857032 1118228 cri.go:89] found id: "87bc5b4c15b869d5c249b5376d8603386b19cae551c89413ab13db65e8987b94"
	I0731 20:36:57.857039 1118228 cri.go:89] found id: "03b10e7eedd37d3e5965658c20cbb51f7420d0c16625edeb6c6fe87f7961994a"
	I0731 20:36:57.857043 1118228 cri.go:89] found id: "019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756"
	I0731 20:36:57.857047 1118228 cri.go:89] found id: "5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338"
	I0731 20:36:57.857051 1118228 cri.go:89] found id: "31bfc4408c834cb4db3698c0ab2de83ba08878dc7aedbf78ae89882b0be2aab0"
	I0731 20:36:57.857054 1118228 cri.go:89] found id: ""
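	The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` run above prints one container ID per line, which is exactly the list of "found id:" entries that follow (including the final empty entry produced by the trailing newline). A minimal Go sketch of that listing step, assuming crictl is on PATH and run locally (illustrative only; minikube drives the command over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs lists kube-system container IDs via crictl,
    // mirroring the command and the per-line "found id" output in the log.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            ids = append(ids, strings.TrimSpace(line)) // last entry is "" from the trailing newline
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        fmt.Println(len(ids), err)
    }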
	I0731 20:36:57.857107 1118228 ssh_runner.go:195] Run: sudo runc list -f json
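	After the crictl pass, the log shows a cross-check against the low-level runtime with `sudo runc list -f json`, which emits a JSON array of container state objects. A minimal Go sketch of decoding that output (the "id" and "status" field names are assumed from typical runc JSON output, not taken from this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runcContainer holds only the two fields used here; runc's JSON carries more.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            fmt.Println("runc list failed:", err)
            return
        }
        var containers []runcContainer
        if err := json.Unmarshal(out, &containers); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, c := range containers {
            fmt.Println(c.ID, c.Status)
        }
    }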
	
	
	==> CRI-O <==
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.728097130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458547728075169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5c9d3f4-3d45-48ca-9934-139a97c51196 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.728611468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3b2d171-48e9-4ce5-b73d-f48c2404b0bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.728703675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3b2d171-48e9-4ce5-b73d-f48c2404b0bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.729305837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3b2d171-48e9-4ce5-b73d-f48c2404b0bd name=/runtime.v1.RuntimeService/ListContainers
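	The CRI-O debug entries above are the server side of the CRI RuntimeService: each ListContainers request carries an empty filter, so CRI-O returns the full container inventory, and the same dump repeats every few tens of milliseconds while the runtime is being polled (by crictl, the kubelet, or the report's log collector). A minimal sketch of the client side of that call, assuming the standard k8s.io/cri-api v1 types and CRI-O's default socket path (an assumption; this is not the poller that produced these log lines):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default CRI socket; adjust for other runtimes.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // An empty filter reproduces the "No filters were applied, returning
        // full container list" path seen in the CRI-O debug log above.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id, c.Metadata.GetName(), c.State)
        }
    }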
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.768291006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95930b52-92dd-48bc-9b5a-872e2fc78273 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.768397565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95930b52-92dd-48bc-9b5a-872e2fc78273 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.769348099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc46af92-acfd-47c0-902a-91a8b8fd4491 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.769771172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458547769745572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc46af92-acfd-47c0-902a-91a8b8fd4491 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.770270451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cd65560-de56-4850-9c2e-45d92e93d93d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.770385605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cd65560-de56-4850-9c2e-45d92e93d93d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.770774444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cd65560-de56-4850-9c2e-45d92e93d93d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.809194379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be089d35-8aca-4088-90e2-9f7612e988cc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.809295139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be089d35-8aca-4088-90e2-9f7612e988cc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.810266495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=800b2a57-b9bc-46d2-86e0-50a1151c5324 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.810827083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458547810802499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=800b2a57-b9bc-46d2-86e0-50a1151c5324 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.811474758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39bfdc67-8559-47e1-a9de-2f3d4f4609f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.811546192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39bfdc67-8559-47e1-a9de-2f3d4f4609f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.812015314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39bfdc67-8559-47e1-a9de-2f3d4f4609f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.849416833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0993acd-858b-45d4-a935-c4d634ae0aa9 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.849567361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0993acd-858b-45d4-a935-c4d634ae0aa9 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.850893001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0bb5a23-cbfc-4cf9-9425-d116040da542 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.851501834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458547851475759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0bb5a23-cbfc-4cf9-9425-d116040da542 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.852060145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=879b64ea-d1f4-476c-baf9-b7179029db80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.852119260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=879b64ea-d1f4-476c-baf9-b7179029db80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:42:27 ha-430887 crio[3774]: time="2024-07-31 20:42:27.852588463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ed0f9d6d5314f828124074d1f8942d814ad229f24cd6043c6dd25457736d5ee8,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458308468015854,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722458268472775814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722458267469039756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d19393b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369,PodSandboxId:434a21f7beec6edcabf4886bef19be1223d2c2f153c9bee9a39eaca97a127466,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458261463631372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb16097-a994-4b42-b876-ebe7d6022be6,},Annotations:map[string]string{io.kubernetes.container.hash: 114747d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5720ff2aa5d083902177ccc0a0d9fb72a54818ffdf2555b52374af4801a4d0f,PodSandboxId:2860c6703133aeaf94ee73650597080755fe705e0a88c5bafe98245e10bb64ef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722458257575965455,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annotations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd405e99c37c92f096d02d53b1746380ce9b46f33c282225e1c3f54bf2ca96c,PodSandboxId:fa66d796b0c21e9a5861f1ea8885c6ba9fcc89d84bf04612f24de3904a4c9089,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722458238853740374,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380c723c996f1b4dd3c3fdf0d8cb6c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520,PodSandboxId:ff5c7461ce1e763578c38e07a162c23411d580eb076d5235f8fd8b54bb2d502d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722458224339542405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce,PodSandboxId:5e6ab10f8cba822d617ef6ae172f980d60eb19d44c74f40f3c0ff541e8704709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224379069901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d,PodSandboxId:499062c60ea08147d337be8c35b9c54d72f25dbfcc6a20f986c204fb4f39f647,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722458224275273223,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1,PodSandboxId:da3887f33eff5ea2127d01fcb2e2785de06fee5d85c59e7e1baaa6f43b9b3f8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458224325648519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b,PodSandboxId:62d4f1a1400045d76a0793b42450da5315cad90527b3d3e54a9c4d48ccba944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722458224194112115,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6,PodSandboxId:9ad3244ebf70d7395ba87af8c58e58e0e8644c2155fdd759dabd59ee91fa7104,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722458224226701800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7
a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3,PodSandboxId:79acd5a39095a7abbdeb276e799bbd5e986f928a9c5b09f499104f3efdd3e286,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722458224159993645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7dc3b82901d193
93b1a5032c0de400,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381,PodSandboxId:f8f7b843226da27e5961cb3565a95e256f16fd857c9864d63e48802e4b19e980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722458224050524647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586dfd40543240aed00e0fd894b7ddbf,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3c25732f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61252be77d594a7e954c66d12af8c3c1cce75aada7650e557e2bbe365c1771f,PodSandboxId:94749dc3b8a0578cb66e0609ee481669ef129926c7719ce5c123f1ebaebad5ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457728387872762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tkmzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b668a1b0-4434-4037-a0a1-0461e748521d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 49f9b92f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02,PodSandboxId:364daaeb39b2a2d2750c0514b543d5abdb299d052456c485b332716cb1a97783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587826857015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tkm49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c751586-1fd3-4ebc-8d3f-602f3a70c3ac,},Annotations:map[string]string{io.kube
rnetes.container.hash: d266b3d8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a,PodSandboxId:c5096ff8ccf93c716cd97ab942b56547a47e51039b73dc22c686051d8a7e5c44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457587459364853,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rhlnq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a333762-0e0a-4a9a-bede-b6cf8a2b221c,},Annotations:map[string]string{io.kubernetes.container.hash: 1fb03862,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d,PodSandboxId:75a5e3ddf89ae6ecf0a813e8543ada8f34b0ad10847359a9eb3df1110c3021b8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457575608884896,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmjzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a3055d-bcf0-472f-b9f6-787e6f4499cb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cc25629,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115,PodSandboxId:45f974d9fa89f45c07fbf9c576a5f7b79a58dc42685896d0cf0a30af1148a5e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457572328099829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m49fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6686467c-0177-47b5-a286-cf718c901436,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd17406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756,PodSandboxId:e2bba8d22a3ce49b00806f23a21b6550c8d240acd6788195e6e1c3abe4a9198a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722457550283072418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35257eb5487c079f33eba6618833709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338,PodSandboxId:9da4629d918d33b0df1140b5513117c37f9760d217cec7d72c23536e3aa92cc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722457550254701021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-430887,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff059524622ab33693d7a7d489e8add,},Annotations:map[string]string{io.kubernetes.container.hash: 26889e88,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=879b64ea-d1f4-476c-baf9-b7179029db80 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed0f9d6d5314f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   434a21f7beec6       storage-provisioner
	faa9efba25e3e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   f8f7b843226da       kube-apiserver-ha-430887
	a0c6cc7ab3ded       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   79acd5a39095a       kube-controller-manager-ha-430887
	34f2b676b4617       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   434a21f7beec6       storage-provisioner
	c5720ff2aa5d0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   2860c6703133a       busybox-fc5497c4f-tkmzn
	ccd405e99c37c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   fa66d796b0c21       kube-vip-ha-430887
	8dd05ed18c213       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   5e6ab10f8cba8       coredns-7db6d8ff4d-tkm49
	76b2da629018b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   ff5c7461ce1e7       kube-proxy-m49fz
	748dac0b04e4b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   da3887f33eff5       coredns-7db6d8ff4d-rhlnq
	3586b36e485e4       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   499062c60ea08       kindnet-xmjzn
	b254f1ebef43b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   9ad3244ebf70d       etcd-ha-430887
	104ea95fae730       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   62d4f1a140004       kube-scheduler-ha-430887
	a34eb23fafa7e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   79acd5a39095a       kube-controller-manager-ha-430887
	6e8f03aa65b75       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   f8f7b843226da       kube-apiserver-ha-430887
	b61252be77d59       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   94749dc3b8a05       busybox-fc5497c4f-tkmzn
	6804a88577bb9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   364daaeb39b2a       coredns-7db6d8ff4d-tkm49
	a3a604ebae38f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   c5096ff8ccf93       coredns-7db6d8ff4d-rhlnq
	63366667a98d5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   75a5e3ddf89ae       kindnet-xmjzn
	2c3cfe9da185a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   45f974d9fa89f       kube-proxy-m49fz
	019dbd42b381f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   e2bba8d22a3ce       kube-scheduler-ha-430887
	5d05fc1d45725       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   9da4629d918d3       etcd-ha-430887
	
	
	==> coredns [6804a88577bb93764f418e0ec12954c6cd85303fe7a3c4e169f7c4402b803a02] <==
	[INFO] 10.244.1.2:51933 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175849s
	[INFO] 10.244.1.2:36619 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118307s
	[INFO] 10.244.2.2:51012 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000102784s
	[INFO] 10.244.2.2:46299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151507s
	[INFO] 10.244.2.2:32857 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075858s
	[INFO] 10.244.0.4:40942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087643s
	[INFO] 10.244.0.4:34086 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001741525s
	[INFO] 10.244.0.4:52613 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051957s
	[INFO] 10.244.0.4:48069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001210819s
	[INFO] 10.244.1.2:57723 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084885s
	[INFO] 10.244.1.2:43800 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099387s
	[INFO] 10.244.2.2:48837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134956s
	[INFO] 10.244.2.2:46133 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008076s
	[INFO] 10.244.1.2:52179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123976s
	[INFO] 10.244.1.2:38064 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121703s
	[INFO] 10.244.2.2:38356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183387s
	[INFO] 10.244.2.2:45481 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194275s
	[INFO] 10.244.2.2:42027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138509s
	[INFO] 10.244.2.2:47364 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140763s
	[INFO] 10.244.0.4:57224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075497s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [748dac0b04e4befbd28dcfdf92d7ba749dc980236ed137f8d4e8523ea0ce35e1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[909917760]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:37:13.228) (total time: 10001ms):
	Trace[909917760]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:37:23.229)
	Trace[909917760]: [10.00156638s] [10.00156638s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46132->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46132->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8dd05ed18c21383600fc0a860b17cca75a5bb3b7401fd5daf627387d0796c7ce] <==
	Trace[1004538133]: [10.241951709s] [10.241951709s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39990->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[13707683]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:37:16.248) (total time: 10019ms):
	Trace[13707683]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer 10019ms (20:37:26.267)
	Trace[13707683]: [10.019150778s] [10.019150778s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39998->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
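
	The repeated "failed to list *v1.Service ... dial tcp 10.96.0.1:443" entries above are CoreDNS's kubernetes plugin retrying its reflector's initial list against the in-cluster API service VIP. As a rough illustration only (not part of the test harness; the program below and its environment are assumed), the same call path can be exercised from inside a pod with client-go:

	// list_services.go - minimal sketch, assuming it runs inside a pod with a
	// service account, the same situation CoreDNS is in when it logs the
	// errors above.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// InClusterConfig points the client at the kubernetes service VIP
		// (10.96.0.1:443 in the logs above).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		// Same initial list the reflector performs before watching; a dead or
		// restarting apiserver surfaces here as "connection refused" or
		// "no route to host", exactly as in the CoreDNS log.
		svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
			context.Background(), metav1.ListOptions{Limit: 500})
		if err != nil {
			log.Fatalf("failed to list *v1.Service: %v", err)
		}
		fmt.Printf("listed %d services\n", len(svcs.Items))
	}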
	
	
	==> coredns [a3a604ebae38fd1a4ba628500a1e9d20e3ebb4f69c37930c53ae504f21bbe31a] <==
	[INFO] 10.244.0.4:35814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077793s
	[INFO] 10.244.0.4:57174 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050499s
	[INFO] 10.244.1.2:35721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152974s
	[INFO] 10.244.1.2:52365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099511s
	[INFO] 10.244.2.2:56276 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095649s
	[INFO] 10.244.2.2:33350 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089031s
	[INFO] 10.244.0.4:39526 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089609s
	[INFO] 10.244.0.4:32892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000036988s
	[INFO] 10.244.0.4:54821 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028078s
	[INFO] 10.244.0.4:40693 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000023261s
	[INFO] 10.244.1.2:56760 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130165s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109643s
	[INFO] 10.244.0.4:55943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117823s
	[INFO] 10.244.0.4:40806 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010301s
	[INFO] 10.244.0.4:50703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076201s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1911&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2113701684]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 20:35:08.117) (total time: 11133ms):
	Trace[2113701684]: ---"Objects listed" error:Unauthorized 11133ms (20:35:19.250)
	Trace[2113701684]: [11.133957975s] [11.133957975s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-430887
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_25_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:25:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:25:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:37:48 +0000   Wed, 31 Jul 2024 20:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-430887
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d983ecff48054665b7d9523d0704c9fc
	  System UUID:                d983ecff-4805-4665-b7d9-523d0704c9fc
	  Boot ID:                    713545a1-3d19-4194-8d69-3cd83a4e4967
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tkmzn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-rhlnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-tkm49             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-430887                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-xmjzn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-430887             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-430887    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-m49fz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-430887             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-430887                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m39s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-430887 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-430887 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-430887 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-430887 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Warning  ContainerGCFailed        5m32s (x2 over 6m32s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m34s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-430887 event: Registered Node ha-430887 in Controller
	
	
	Name:               ha-430887-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_27_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:42:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:38:30 +0000   Wed, 31 Jul 2024 20:37:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-430887-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec9db720f1af4a7b8ddebc5f57826488
	  System UUID:                ec9db720-f1af-4a7b-8dde-bc5f57826488
	  Boot ID:                    c0cff76a-37e5-4ff7-a710-fedd91287908
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hhwcx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-430887-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-49h86                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-430887-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-430887-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hsd92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-430887-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-430887-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)    kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)    kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)    kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           15m                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeReady                15m                  kubelet          Node ha-430887-m02 status is now: NodeReady
	  Normal  RegisteredNode           13m                  node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  NodeNotReady             11m                  node-controller  Node ha-430887-m02 status is now: NodeNotReady
	  Normal  Starting                 5m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-430887-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-430887-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m34s                node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           4m25s                node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	  Normal  RegisteredNode           3m10s                node-controller  Node ha-430887-m02 event: Registered Node ha-430887-m02 in Controller
	
	
	Name:               ha-430887-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-430887-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=ha-430887
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_29_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:29:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-430887-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:40:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 20:39:40 +0000   Wed, 31 Jul 2024 20:40:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-430887-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e62b3ad5cf6244ff98aa273667a5b995
	  System UUID:                e62b3ad5-cf62-44ff-98aa-273667a5b995
	  Boot ID:                    cd5876c1-2dfd-4349-a3a2-5e689fc17a20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vctvr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-gg2tl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8cqlp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-430887-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m34s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   NodeNotReady             3m54s                  node-controller  Node ha-430887-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-430887-m04 event: Registered Node ha-430887-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-430887-m04 has been rebooted, boot id: cd5876c1-2dfd-4349-a3a2-5e689fc17a20
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-430887-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-430887-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-430887-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-430887-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-430887-m04 status is now: NodeNotReady
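
	ha-430887-m04's Ready condition flipping to Unknown and the node.kubernetes.io/unreachable taints above are what the node-controller applies once the kubelet stops posting status. A minimal sketch (kubeconfig path and this helper program are assumptions for illustration, not part of minikube) that reads the same conditions and taints with client-go:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube writes one under $HOME/.kube/config.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.Background(),
			"ha-430887-m04", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Conditions mirror the table above (all Unknown once status stops).
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-8s %s\n", c.Type, c.Status, c.Reason)
		}
		// Taints mirror the node.kubernetes.io/unreachable entries above.
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s:%s\n", t.Key, t.Effect)
		}
	}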
	
	
	==> dmesg <==
	[  +6.396030] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.056539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053894] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164850] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.142838] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.248524] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +3.814747] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.436744] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.058175] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.102873] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.077595] kauditd_printk_skb: 79 callbacks suppressed
	[Jul31 20:26] kauditd_printk_skb: 18 callbacks suppressed
	[ +24.630735] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 20:27] kauditd_printk_skb: 28 callbacks suppressed
	[Jul31 20:36] systemd-fstab-generator[3687]: Ignoring "noauto" option for root device
	[  +0.147345] systemd-fstab-generator[3699]: Ignoring "noauto" option for root device
	[  +0.161550] systemd-fstab-generator[3713]: Ignoring "noauto" option for root device
	[  +0.146078] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.256781] systemd-fstab-generator[3753]: Ignoring "noauto" option for root device
	[  +3.854286] systemd-fstab-generator[3862]: Ignoring "noauto" option for root device
	[Jul31 20:37] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.146598] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.051910] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.845025] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.600230] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [5d05fc1d45725cbb9197dc4f97d4add9580b53fd203830bcbed81f9b85403338] <==
	{"level":"info","ts":"2024-07-31T20:35:21.200919Z","caller":"traceutil/trace.go:171","msg":"trace[1612728749] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"618.987905ms","start":"2024-07-31T20:35:20.581928Z","end":"2024-07-31T20:35:21.200916Z","steps":["trace[1612728749] 'agreement among raft nodes before linearized reading'  (duration: 618.975056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:35:21.200932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:35:20.581913Z","time spent":"619.014065ms","remote":"127.0.0.1:40510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/07/31 20:35:21 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 20:35:21 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T20:35:21.206296Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6657042673363248962,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-31T20:35:21.266455Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:35:21.266508Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:35:21.266571Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T20:35:21.266689Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266726Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266746Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266773Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266827Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266875Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266887Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c1466f1ea1ac417e"}
	{"level":"info","ts":"2024-07-31T20:35:21.266892Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.2669Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.266927Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.267038Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.26712Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.26728Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.267315Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:35:21.269818Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-31T20:35:21.269943Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-31T20:35:21.269976Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-430887","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> etcd [b254f1ebef43b42817577251b8c0c6312924fba96a841d7136dc28b9f9b1ebf6] <==
	{"level":"info","ts":"2024-07-31T20:39:03.173826Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"efe64029709f6fc1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T20:39:03.174018Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.174333Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"efe64029709f6fc1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T20:39:03.174407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.18162Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:03.185655Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:03.192815Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.44:33348","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-31T20:39:54.287388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 switched to configuration voters=(3623242536957402210 13926941075041960318)"}
	{"level":"info","ts":"2024-07-31T20:39:54.293583Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","removed-remote-peer-id":"efe64029709f6fc1","removed-remote-peer-urls":["https://192.168.39.44:2380"]}
	{"level":"info","ts":"2024-07-31T20:39:54.293701Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:54.293835Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.293885Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:54.298957Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.299059Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.299383Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:54.299794Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1","error":"context canceled"}
	{"level":"warn","ts":"2024-07-31T20:39:54.299864Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"efe64029709f6fc1","error":"failed to read efe64029709f6fc1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-31T20:39:54.299924Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:54.300056Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1","error":"context canceled"}
	{"level":"info","ts":"2024-07-31T20:39:54.300104Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.300187Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.300242Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"324857e3fe6e5c62","removed-remote-peer-id":"efe64029709f6fc1"}
	{"level":"info","ts":"2024-07-31T20:39:54.300318Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"324857e3fe6e5c62","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"efe64029709f6fc1"}
	{"level":"warn","ts":"2024-07-31T20:39:54.314656Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.44:53046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-31T20:39:54.315692Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.44:53042","server-name":"","error":"read tcp 192.168.39.195:2380->192.168.39.44:53042: read: connection reset by peer"}
	
	
	==> kernel <==
	 20:42:28 up 17 min,  0 users,  load average: 0.34, 0.26, 0.20
	Linux ha-430887 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3586b36e485e4625445936ab6460dbd0ab9487f07a0f66851cd912c00e09874d] <==
	I0731 20:41:45.209413       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:41:55.212102       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:41:55.212173       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:41:55.212294       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:41:55.212342       1 main.go:299] handling current node
	I0731 20:41:55.212355       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:41:55.212372       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:42:05.205219       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:42:05.205412       1 main.go:299] handling current node
	I0731 20:42:05.205467       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:42:05.205493       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:42:05.205630       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:42:05.205699       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:42:15.207926       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:42:15.208027       1 main.go:299] handling current node
	I0731 20:42:15.208057       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:42:15.208082       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:42:15.208258       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:42:15.208308       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:42:25.211366       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:42:25.211466       1 main.go:299] handling current node
	I0731 20:42:25.211505       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:42:25.211524       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:42:25.211653       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:42:25.211686       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [63366667a98d59f6fc711cfa8073c47448aa35e08665409efc576300358c163d] <==
	I0731 20:34:56.552873       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:34:56.552936       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:34:56.553058       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:34:56.553081       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:34:56.553161       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:34:56.553178       1 main.go:299] handling current node
	I0731 20:34:56.553190       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:34:56.553204       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:06.553256       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:35:06.553316       1 main.go:299] handling current node
	I0731 20:35:06.553329       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:35:06.553335       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:06.553463       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:35:06.553481       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:35:06.553531       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:35:06.553547       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	I0731 20:35:16.552777       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0731 20:35:16.552820       1 main.go:299] handling current node
	I0731 20:35:16.552838       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I0731 20:35:16.552846       1 main.go:322] Node ha-430887-m02 has CIDR [10.244.1.0/24] 
	I0731 20:35:16.552985       1 main.go:295] Handling node with IPs: map[192.168.39.44:{}]
	I0731 20:35:16.553015       1 main.go:322] Node ha-430887-m03 has CIDR [10.244.2.0/24] 
	I0731 20:35:16.553108       1 main.go:295] Handling node with IPs: map[192.168.39.83:{}]
	I0731 20:35:16.553185       1 main.go:322] Node ha-430887-m04 has CIDR [10.244.3.0/24] 
	E0731 20:35:19.248469       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [6e8f03aa65b756d5a3ca8ca22e9b4d7bacc2555bf176b3867f0fbbfbd96ab381] <==
	I0731 20:37:04.550318       1 options.go:221] external host was not specified, using 192.168.39.195
	I0731 20:37:04.553114       1 server.go:148] Version: v1.30.3
	I0731 20:37:04.553310       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:05.226461       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 20:37:05.250790       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:37:05.256105       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 20:37:05.258158       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 20:37:05.258329       1 instance.go:299] Using reconciler: lease
	W0731 20:37:25.226315       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 20:37:25.226519       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0731 20:37:25.259859       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [faa9efba25e3e7fd86b15292153a058fc3d7d98ce789b69a4381f53411517da9] <==
	I0731 20:37:50.095684       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 20:37:50.095880       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 20:37:50.193357       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:37:50.193764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:37:50.198620       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 20:37:50.198692       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 20:37:50.198756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 20:37:50.199193       1 aggregator.go:165] initial CRD sync complete...
	I0731 20:37:50.199203       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 20:37:50.199209       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 20:37:50.199216       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:37:50.198764       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 20:37:50.198773       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 20:37:50.199839       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0731 20:37:50.206998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.44]
	I0731 20:37:50.244394       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 20:37:50.248640       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:37:50.248668       1 policy_source.go:224] refreshing policies
	I0731 20:37:50.275807       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:37:50.308650       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:37:50.315836       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 20:37:50.320898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 20:37:51.103870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 20:37:51.435857       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.195 192.168.39.44]
	W0731 20:38:01.435926       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149 192.168.39.195]
	
	
	==> kube-controller-manager [a0c6cc7ab3dedbf3319d7830766de1e875d153746f7530bcddb227e96fef94a7] <==
	E0731 20:40:43.121900       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:40:43.121907       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:40:43.121912       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:40:43.121917       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	I0731 20:40:43.243386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.949979ms"
	I0731 20:40:43.243992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.724µs"
	E0731 20:41:03.122050       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:41:03.122176       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:41:03.122206       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:41:03.122230       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	E0731 20:41:03.122254       1 gc_controller.go:153] "Failed to get node" err="node \"ha-430887-m03\" not found" logger="pod-garbage-collector-controller" node="ha-430887-m03"
	I0731 20:41:03.131481       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-430887-m03"
	I0731 20:41:03.158733       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-430887-m03"
	I0731 20:41:03.158814       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-430887-m03"
	I0731 20:41:03.182165       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-430887-m03"
	I0731 20:41:03.182436       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-430887-m03"
	I0731 20:41:03.208341       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-430887-m03"
	I0731 20:41:03.208428       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fbt5h"
	I0731 20:41:03.242662       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fbt5h"
	I0731 20:41:03.243241       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-430887-m03"
	I0731 20:41:03.272248       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-430887-m03"
	I0731 20:41:03.272281       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4mft2"
	I0731 20:41:03.296424       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4mft2"
	I0731 20:41:03.296846       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-430887-m03"
	I0731 20:41:03.320700       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-430887-m03"
	
	
	==> kube-controller-manager [a34eb23fafa7e0682aea117685481249296ff99dedb2e1c2de63438bba6962a3] <==
	I0731 20:37:05.588803       1 serving.go:380] Generated self-signed cert in-memory
	I0731 20:37:06.080670       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 20:37:06.080701       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:06.082020       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 20:37:06.082214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 20:37:06.082268       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 20:37:06.082451       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0731 20:37:26.265264       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.195:8443/healthz\": dial tcp 192.168.39.195:8443: connect: connection refused"
	
	
	==> kube-proxy [2c3cfe9da185a052089fa0c6566579e254a013c410181ef004e7f63ccc43e115] <==
	E0731 20:34:04.448625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:04.448688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:04.448819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:11.360919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:11.361942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:20.578176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:20.578668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:20.578439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:20.578742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:23.649406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:23.649482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:35.937622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:35.937759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:42.081089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:42.081416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:34:45.153408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:34:45.153461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-430887&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:35:03.585743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:35:03.585847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1811": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 20:35:18.945205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 20:35:18.945263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1885": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [76b2da629018bae06e94c466ffc762c15bccc085cb9ed7263ff3f56541d11520] <==
	I0731 20:37:05.525227       1 server_linux.go:69] "Using iptables proxy"
	E0731 20:37:06.465069       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:09.536632       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:12.608905       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:18.752563       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 20:37:31.040612       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-430887\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 20:37:49.024806       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	I0731 20:37:49.131206       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:37:49.134061       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:37:49.134236       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:37:49.169290       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:37:49.169496       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:37:49.169535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:37:49.172072       1 config.go:192] "Starting service config controller"
	I0731 20:37:49.172109       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:37:49.172261       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:37:49.172284       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:37:49.172812       1 config.go:319] "Starting node config controller"
	I0731 20:37:49.172844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:37:49.273569       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:37:49.273736       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:37:49.273768       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [019dbd42b381f2d1bf4e89bd22d2327e954dd298b99f16d3e32a84b935298756] <==
	W0731 20:35:14.392120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:35:14.392212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:35:14.557344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:35:14.557421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:35:14.905006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:14.905088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:14.924984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:35:14.925053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 20:35:15.099406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:35:15.099496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:35:15.184070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:15.184168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:16.073162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:16.073306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:35:16.145853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:35:16.145959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:35:16.254745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:35:16.254900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:35:16.570986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:35:16.571086       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:35:16.620480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:35:16.620535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:35:20.622413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:20.622446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:35:21.194722       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [104ea95fae73065296831bc0c2b7a73d5570dc678c134726eccddd6f40a17d6b] <==
	W0731 20:37:43.082400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.082470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.569247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.195:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.569301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.195:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.823836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.823895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.924952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.195:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.925015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.195:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:43.949705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:43.949760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:44.781199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:44.781246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:44.970676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:44.970812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:45.677534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.195:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:45.677583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.195:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:46.371569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:46.371671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:46.728564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:46.728600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:47.627873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:47.627929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	W0731 20:37:48.143815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0731 20:37:48.143929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	I0731 20:38:02.074658       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.453229    1378 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-430887" podUID="516521a0-b217-407d-90ee-917c6cb6991a"
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.476183    1378 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-430887"
	Jul 31 20:38:27 ha-430887 kubelet[1378]: I0731 20:38:27.945795    1378 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-430887" podUID="516521a0-b217-407d-90ee-917c6cb6991a"
	Jul 31 20:38:28 ha-430887 kubelet[1378]: I0731 20:38:28.452966    1378 scope.go:117] "RemoveContainer" containerID="34f2b676b46174487332b004e82a79983e7012986d16b8bfbd38740b65d2e369"
	Jul 31 20:38:28 ha-430887 kubelet[1378]: I0731 20:38:28.969391    1378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-430887" podStartSLOduration=1.969368764 podStartE2EDuration="1.969368764s" podCreationTimestamp="2024-07-31 20:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 20:38:28.968501158 +0000 UTC m=+752.634793751" watchObservedRunningTime="2024-07-31 20:38:28.969368764 +0000 UTC m=+752.635661354"
	Jul 31 20:38:56 ha-430887 kubelet[1378]: E0731 20:38:56.467923    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:38:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:38:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:39:56 ha-430887 kubelet[1378]: E0731 20:39:56.467555    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:39:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:39:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:39:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:39:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:40:56 ha-430887 kubelet[1378]: E0731 20:40:56.466734    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:40:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:40:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:40:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:40:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:41:56 ha-430887 kubelet[1378]: E0731 20:41:56.466034    1378 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:41:56 ha-430887 kubelet[1378]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:41:56 ha-430887 kubelet[1378]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:41:56 ha-430887 kubelet[1378]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:41:56 ha-430887 kubelet[1378]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:42:27.463703 1120643 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19360-1093692/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-430887 -n ha-430887
helpers_test.go:261: (dbg) Run:  kubectl --context ha-430887 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-220043
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-220043
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-220043: exit status 82 (2m1.769845232s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-220043-m03"  ...
	* Stopping node "multinode-220043-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-220043" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-220043 --wait=true -v=8 --alsologtostderr
E0731 20:59:31.357761 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 21:02:00.018780 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-220043 --wait=true -v=8 --alsologtostderr: (3m20.755956209s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-220043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-220043 -n multinode-220043
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-220043 logs -n 25: (1.412244268s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043:/home/docker/cp-test_multinode-220043-m02_multinode-220043.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043 sudo cat                                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m02_multinode-220043.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03:/home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043-m03 sudo cat                                   | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp testdata/cp-test.txt                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043:/home/docker/cp-test_multinode-220043-m03_multinode-220043.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043 sudo cat                                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02:/home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043-m02 sudo cat                                   | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-220043 node stop m03                                                          | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	| node    | multinode-220043 node start                                                             | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| stop    | -p multinode-220043                                                                     | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| start   | -p multinode-220043                                                                     | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:59 UTC | 31 Jul 24 21:02 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:59:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:59:03.605333 1130033 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:59:03.605452 1130033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:59:03.605460 1130033 out.go:304] Setting ErrFile to fd 2...
	I0731 20:59:03.605464 1130033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:59:03.605646 1130033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:59:03.606197 1130033 out.go:298] Setting JSON to false
	I0731 20:59:03.607256 1130033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":16895,"bootTime":1722442649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:59:03.607322 1130033 start.go:139] virtualization: kvm guest
	I0731 20:59:03.609371 1130033 out.go:177] * [multinode-220043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:59:03.610691 1130033 notify.go:220] Checking for updates...
	I0731 20:59:03.610701 1130033 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:59:03.612401 1130033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:59:03.614132 1130033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:59:03.615311 1130033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:59:03.616528 1130033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:59:03.617759 1130033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:59:03.619830 1130033 config.go:182] Loaded profile config "multinode-220043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:03.619986 1130033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:59:03.620678 1130033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:59:03.620774 1130033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:03.636949 1130033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0731 20:59:03.637453 1130033 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:03.638082 1130033 main.go:141] libmachine: Using API Version  1
	I0731 20:59:03.638105 1130033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:03.638474 1130033 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:03.638675 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.675565 1130033 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:59:03.676772 1130033 start.go:297] selected driver: kvm2
	I0731 20:59:03.676784 1130033 start.go:901] validating driver "kvm2" against &{Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:03.676939 1130033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:59:03.677378 1130033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:59:03.677461 1130033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:59:03.693539 1130033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:59:03.694297 1130033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:59:03.694341 1130033 cni.go:84] Creating CNI manager for ""
	I0731 20:59:03.694349 1130033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:59:03.694405 1130033 start.go:340] cluster config:
	{Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:03.694532 1130033 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:59:03.696632 1130033 out.go:177] * Starting "multinode-220043" primary control-plane node in "multinode-220043" cluster
	I0731 20:59:03.697801 1130033 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:03.697848 1130033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:59:03.697862 1130033 cache.go:56] Caching tarball of preloaded images
	I0731 20:59:03.697977 1130033 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:59:03.697990 1130033 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:59:03.698190 1130033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/config.json ...
	I0731 20:59:03.698461 1130033 start.go:360] acquireMachinesLock for multinode-220043: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:59:03.698519 1130033 start.go:364] duration metric: took 32.098µs to acquireMachinesLock for "multinode-220043"
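	The "duration metric" lines above and throughout this log follow a simple measure-and-log pattern. A minimal Go sketch of that pattern, with illustrative names rather than minikube's actual internals:

	package main

	import (
		"log"
		"sync"
		"time"
	)

	// machinesLock stands in for the per-profile lock named in the log.
	var machinesLock sync.Mutex

	func main() {
		start := time.Now()
		machinesLock.Lock()
		defer machinesLock.Unlock()
		// Log how long acquiring the lock took, as the "duration metric" lines do.
		log.Printf("duration metric: took %s to acquireMachinesLock for %q",
			time.Since(start), "multinode-220043")
	}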
	I0731 20:59:03.698555 1130033 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:03.698565 1130033 fix.go:54] fixHost starting: 
	I0731 20:59:03.698847 1130033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:59:03.698886 1130033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:03.714120 1130033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0731 20:59:03.714587 1130033 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:03.715039 1130033 main.go:141] libmachine: Using API Version  1
	I0731 20:59:03.715065 1130033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:03.715373 1130033 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:03.715561 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.715759 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetState
	I0731 20:59:03.717313 1130033 fix.go:112] recreateIfNeeded on multinode-220043: state=Running err=<nil>
	W0731 20:59:03.717344 1130033 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:03.719137 1130033 out.go:177] * Updating the running kvm2 "multinode-220043" VM ...
	I0731 20:59:03.720311 1130033 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:03.720331 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.720561 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.723125 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.723524 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.723563 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.723667 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.723885 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.724048 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.724196 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.724372 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.724601 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.724615 1130033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:03.832414 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-220043
	
	I0731 20:59:03.832448 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:03.832748 1130033 buildroot.go:166] provisioning hostname "multinode-220043"
	I0731 20:59:03.832785 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:03.833023 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.835701 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.836215 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.836261 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.836453 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.836675 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.836813 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.836953 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.837100 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.837311 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.837334 1130033 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-220043 && echo "multinode-220043" | sudo tee /etc/hostname
	I0731 20:59:03.962280 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-220043
	
	I0731 20:59:03.962318 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.965130 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.965457 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.965490 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.965725 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.965934 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.966154 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.966311 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.966495 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.966667 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.966683 1130033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-220043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-220043/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-220043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:04.077214 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:04.077246 1130033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:59:04.077271 1130033 buildroot.go:174] setting up certificates
	I0731 20:59:04.077280 1130033 provision.go:84] configureAuth start
	I0731 20:59:04.077288 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:04.077592 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 20:59:04.080282 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.080648 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.080675 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.080853 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.083024 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.083347 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.083379 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.083509 1130033 provision.go:143] copyHostCerts
	I0731 20:59:04.083542 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:59:04.083582 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:59:04.083596 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:59:04.083664 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:59:04.083751 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:59:04.083770 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:59:04.083777 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:59:04.083800 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:59:04.083844 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:59:04.083861 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:59:04.083867 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:59:04.083888 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
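	The copyHostCerts steps above remove any existing destination file before copying the source cert over it. A self-contained Go sketch of that remove-then-copy pattern, using the ca.pem paths from the log and a hypothetical copyReplacing helper (not minikube's implementation):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyReplacing removes dst if it already exists, then copies src into place.
	func copyReplacing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Paths taken from the log lines above.
		err := copyReplacing(
			"/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem",
			"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem",
		)
		fmt.Println("copy err:", err)
	}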
	I0731 20:59:04.083945 1130033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.multinode-220043 san=[127.0.0.1 192.168.39.184 localhost minikube multinode-220043]
	I0731 20:59:04.470974 1130033 provision.go:177] copyRemoteCerts
	I0731 20:59:04.471039 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:04.471065 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.473767 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.474112 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.474152 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.474302 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:04.474501 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.474688 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:04.474838 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 20:59:04.560339 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:59:04.560419 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:04.587860 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:59:04.587926 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 20:59:04.611067 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:59:04.611153 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:04.645540 1130033 provision.go:87] duration metric: took 568.246433ms to configureAuth
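	The configureAuth step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.184, localhost, minikube and multinode-220043. A self-contained Go sketch of building such a SAN-bearing server certificate with the standard crypto/x509 package; the throwaway in-memory CA and the 24h validity are assumptions for illustration, whereas the real run signs with the ca.pem/ca-key.pem files named above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA key/cert; errors elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs listed in the provision step above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-220043"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.184")},
			DNSNames:     []string{"localhost", "minikube", "multinode-220043"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}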
	I0731 20:59:04.645572 1130033 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:04.645860 1130033 config.go:182] Loaded profile config "multinode-220043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:04.645958 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.648666 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.649023 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.649051 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.649282 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:04.649480 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.649677 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.649807 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:04.649973 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:04.650136 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:04.650153 1130033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:00:35.350542 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:00:35.350589 1130033 machine.go:97] duration metric: took 1m31.630262442s to provisionDockerMachine
	I0731 21:00:35.350607 1130033 start.go:293] postStartSetup for "multinode-220043" (driver="kvm2")
	I0731 21:00:35.350621 1130033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:00:35.350646 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.351032 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:00:35.351067 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.354422 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.354932 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.354963 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.355150 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.355385 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.355577 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.355725 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.438303 1130033 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:00:35.443199 1130033 command_runner.go:130] > NAME=Buildroot
	I0731 21:00:35.443222 1130033 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 21:00:35.443226 1130033 command_runner.go:130] > ID=buildroot
	I0731 21:00:35.443236 1130033 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 21:00:35.443244 1130033 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 21:00:35.443424 1130033 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:00:35.443451 1130033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:00:35.443522 1130033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:00:35.443627 1130033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:00:35.443642 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 21:00:35.443757 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:00:35.453311 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:00:35.477218 1130033 start.go:296] duration metric: took 126.592093ms for postStartSetup
	I0731 21:00:35.477290 1130033 fix.go:56] duration metric: took 1m31.778724399s for fixHost
	I0731 21:00:35.477322 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.480346 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.480768 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.480803 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.480992 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.481237 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.481428 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.481609 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.481777 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 21:00:35.481992 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 21:00:35.482005 1130033 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:00:35.588955 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459635.575657523
	
	I0731 21:00:35.588977 1130033 fix.go:216] guest clock: 1722459635.575657523
	I0731 21:00:35.588985 1130033 fix.go:229] Guest: 2024-07-31 21:00:35.575657523 +0000 UTC Remote: 2024-07-31 21:00:35.477296456 +0000 UTC m=+91.911357802 (delta=98.361067ms)
	I0731 21:00:35.589006 1130033 fix.go:200] guest clock delta is within tolerance: 98.361067ms
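	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew when it is small. A minimal Go sketch of that comparison using the exact timestamps from the log; the 2s tolerance is an assumed threshold, the log only shows that a ~98ms delta was accepted:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) time.Time {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec).UTC()
	}

	func main() {
		guest := parseGuestClock("1722459635.575657523")                  // guest value from the log
		remote := time.Date(2024, 7, 31, 21, 0, 35, 477296456, time.UTC) // host value from the log
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, not taken from the log
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
	}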
	I0731 21:00:35.589012 1130033 start.go:83] releasing machines lock for "multinode-220043", held for 1m31.89046585s
	I0731 21:00:35.589034 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.589310 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 21:00:35.592082 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.592427 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.592453 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.592636 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593209 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593388 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593479 1130033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:00:35.593527 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.593589 1130033 ssh_runner.go:195] Run: cat /version.json
	I0731 21:00:35.593629 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.596393 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596420 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596826 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.596860 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.596889 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596908 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.597050 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.597161 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.597261 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.597345 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.597460 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.597529 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.597613 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.597695 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.676701 1130033 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 21:00:35.695004 1130033 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 21:00:35.695062 1130033 ssh_runner.go:195] Run: systemctl --version
	I0731 21:00:35.700669 1130033 command_runner.go:130] > systemd 252 (252)
	I0731 21:00:35.700721 1130033 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 21:00:35.700898 1130033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:00:35.850847 1130033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 21:00:35.856762 1130033 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 21:00:35.856947 1130033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:00:35.857022 1130033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:00:35.866402 1130033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
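	The find/mv step above looks for bridge or podman CNI configs under /etc/cni/net.d and renames them to *.mk_disabled; in this run none were found. A rough Go sketch of that disabling logic, assuming the rename-to-.mk_disabled behaviour implied by the command:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("nothing to disable:", err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Disable any bridge/podman CNI config by renaming it out of the way.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
				}
			}
		}
	}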
	I0731 21:00:35.866432 1130033 start.go:495] detecting cgroup driver to use...
	I0731 21:00:35.866511 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:00:35.884299 1130033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:00:35.898903 1130033 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:00:35.898977 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:00:35.913074 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:00:35.927057 1130033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:00:36.078703 1130033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:00:36.227129 1130033 docker.go:233] disabling docker service ...
	I0731 21:00:36.227218 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:00:36.248140 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:00:36.262929 1130033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:00:36.433507 1130033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:00:36.599208 1130033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:00:36.613185 1130033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:00:36.631581 1130033 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 21:00:36.631781 1130033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:00:36.631848 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.642609 1130033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:00:36.642691 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.654016 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.664773 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.675338 1130033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:00:36.686804 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.697317 1130033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.708198 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.718713 1130033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:00:36.728143 1130033 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 21:00:36.728255 1130033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:00:36.737740 1130033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:36.876157 1130033 ssh_runner.go:195] Run: sudo systemctl restart crio
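	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.9 and cgroup_manager is "cgroupfs" before crio is restarted. A small Go sketch showing the same whole-line substitutions as regexp replacements; the sample config content is made up for illustration:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed example of what 02-crio.conf might contain before the rewrite.
		conf := "[crio.runtime]\n" +
			"cgroup_manager = \"systemd\"\n" +
			"pause_image = \"registry.k8s.io/pause:3.8\"\n"

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

		fmt.Print(conf)
	}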
	I0731 21:00:37.691313 1130033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:00:37.691388 1130033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:00:37.696344 1130033 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 21:00:37.696372 1130033 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 21:00:37.696379 1130033 command_runner.go:130] > Device: 0,22	Inode: 1359        Links: 1
	I0731 21:00:37.696386 1130033 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 21:00:37.696391 1130033 command_runner.go:130] > Access: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696396 1130033 command_runner.go:130] > Modify: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696400 1130033 command_runner.go:130] > Change: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696404 1130033 command_runner.go:130] >  Birth: -
	I0731 21:00:37.696424 1130033 start.go:563] Will wait 60s for crictl version
	I0731 21:00:37.696480 1130033 ssh_runner.go:195] Run: which crictl
	I0731 21:00:37.700183 1130033 command_runner.go:130] > /usr/bin/crictl
	I0731 21:00:37.700265 1130033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:00:37.740221 1130033 command_runner.go:130] > Version:  0.1.0
	I0731 21:00:37.740249 1130033 command_runner.go:130] > RuntimeName:  cri-o
	I0731 21:00:37.740254 1130033 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 21:00:37.740260 1130033 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 21:00:37.741390 1130033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:00:37.741475 1130033 ssh_runner.go:195] Run: crio --version
	I0731 21:00:37.769274 1130033 command_runner.go:130] > crio version 1.29.1
	I0731 21:00:37.769309 1130033 command_runner.go:130] > Version:        1.29.1
	I0731 21:00:37.769320 1130033 command_runner.go:130] > GitCommit:      unknown
	I0731 21:00:37.769325 1130033 command_runner.go:130] > GitCommitDate:  unknown
	I0731 21:00:37.769331 1130033 command_runner.go:130] > GitTreeState:   clean
	I0731 21:00:37.769339 1130033 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 21:00:37.769345 1130033 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 21:00:37.769350 1130033 command_runner.go:130] > Compiler:       gc
	I0731 21:00:37.769365 1130033 command_runner.go:130] > Platform:       linux/amd64
	I0731 21:00:37.769376 1130033 command_runner.go:130] > Linkmode:       dynamic
	I0731 21:00:37.769385 1130033 command_runner.go:130] > BuildTags:      
	I0731 21:00:37.769393 1130033 command_runner.go:130] >   containers_image_ostree_stub
	I0731 21:00:37.769400 1130033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 21:00:37.769407 1130033 command_runner.go:130] >   btrfs_noversion
	I0731 21:00:37.769415 1130033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 21:00:37.769422 1130033 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 21:00:37.769428 1130033 command_runner.go:130] >   seccomp
	I0731 21:00:37.769435 1130033 command_runner.go:130] > LDFlags:          unknown
	I0731 21:00:37.769443 1130033 command_runner.go:130] > SeccompEnabled:   true
	I0731 21:00:37.769450 1130033 command_runner.go:130] > AppArmorEnabled:  false
	I0731 21:00:37.769532 1130033 ssh_runner.go:195] Run: crio --version
	I0731 21:00:37.796723 1130033 command_runner.go:130] > crio version 1.29.1
	I0731 21:00:37.796750 1130033 command_runner.go:130] > Version:        1.29.1
	I0731 21:00:37.796758 1130033 command_runner.go:130] > GitCommit:      unknown
	I0731 21:00:37.796765 1130033 command_runner.go:130] > GitCommitDate:  unknown
	I0731 21:00:37.796771 1130033 command_runner.go:130] > GitTreeState:   clean
	I0731 21:00:37.796780 1130033 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 21:00:37.796785 1130033 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 21:00:37.796790 1130033 command_runner.go:130] > Compiler:       gc
	I0731 21:00:37.796796 1130033 command_runner.go:130] > Platform:       linux/amd64
	I0731 21:00:37.796803 1130033 command_runner.go:130] > Linkmode:       dynamic
	I0731 21:00:37.796823 1130033 command_runner.go:130] > BuildTags:      
	I0731 21:00:37.796833 1130033 command_runner.go:130] >   containers_image_ostree_stub
	I0731 21:00:37.796845 1130033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 21:00:37.796851 1130033 command_runner.go:130] >   btrfs_noversion
	I0731 21:00:37.796869 1130033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 21:00:37.796879 1130033 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 21:00:37.796886 1130033 command_runner.go:130] >   seccomp
	I0731 21:00:37.796893 1130033 command_runner.go:130] > LDFlags:          unknown
	I0731 21:00:37.796901 1130033 command_runner.go:130] > SeccompEnabled:   true
	I0731 21:00:37.796914 1130033 command_runner.go:130] > AppArmorEnabled:  false
	I0731 21:00:37.800869 1130033 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:00:37.802070 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 21:00:37.805152 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:37.805556 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:37.805591 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:37.805850 1130033 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:00:37.810072 1130033 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 21:00:37.810222 1130033 kubeadm.go:883] updating cluster {Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:00:37.810374 1130033 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:00:37.810421 1130033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:00:37.855203 1130033 command_runner.go:130] > {
	I0731 21:00:37.855230 1130033 command_runner.go:130] >   "images": [
	I0731 21:00:37.855234 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855242 1130033 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 21:00:37.855247 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855253 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 21:00:37.855261 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855264 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855272 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 21:00:37.855279 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 21:00:37.855282 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855287 1130033 command_runner.go:130] >       "size": "87165492",
	I0731 21:00:37.855291 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855295 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855302 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855306 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855309 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855313 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855319 1130033 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 21:00:37.855326 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855331 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 21:00:37.855337 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855341 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855347 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 21:00:37.855357 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 21:00:37.855364 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855368 1130033 command_runner.go:130] >       "size": "87174707",
	I0731 21:00:37.855371 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855382 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855389 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855394 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855402 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855407 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855419 1130033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 21:00:37.855429 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855436 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 21:00:37.855442 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855446 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855454 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 21:00:37.855463 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 21:00:37.855467 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855472 1130033 command_runner.go:130] >       "size": "1363676",
	I0731 21:00:37.855478 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855482 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855489 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855496 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855501 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855506 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855519 1130033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 21:00:37.855528 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855536 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 21:00:37.855539 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855546 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855553 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 21:00:37.855575 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 21:00:37.855584 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855591 1130033 command_runner.go:130] >       "size": "31470524",
	I0731 21:00:37.855597 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855607 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855616 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855626 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855632 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855636 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855644 1130033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 21:00:37.855651 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855656 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 21:00:37.855696 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855740 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855755 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 21:00:37.855766 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 21:00:37.855772 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855777 1130033 command_runner.go:130] >       "size": "61245718",
	I0731 21:00:37.855782 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855788 1130033 command_runner.go:130] >       "username": "nonroot",
	I0731 21:00:37.855796 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855805 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855812 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855820 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855830 1130033 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 21:00:37.855839 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855850 1130033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 21:00:37.855857 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855867 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855877 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 21:00:37.855891 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 21:00:37.855900 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855911 1130033 command_runner.go:130] >       "size": "150779692",
	I0731 21:00:37.855920 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.855930 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.855938 1130033 command_runner.go:130] >       },
	I0731 21:00:37.855944 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855953 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855958 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855964 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855970 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855984 1130033 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 21:00:37.855999 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856010 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 21:00:37.856021 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856031 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856044 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 21:00:37.856055 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 21:00:37.856064 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856075 1130033 command_runner.go:130] >       "size": "117609954",
	I0731 21:00:37.856084 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856109 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856115 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856125 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856132 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856142 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856148 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856157 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856169 1130033 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 21:00:37.856175 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856184 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 21:00:37.856194 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856200 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856225 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 21:00:37.856241 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 21:00:37.856250 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856259 1130033 command_runner.go:130] >       "size": "112198984",
	I0731 21:00:37.856268 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856278 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856285 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856291 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856296 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856301 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856304 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856309 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856318 1130033 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 21:00:37.856324 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856332 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 21:00:37.856339 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856346 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856358 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 21:00:37.856369 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 21:00:37.856375 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856381 1130033 command_runner.go:130] >       "size": "85953945",
	I0731 21:00:37.856387 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.856391 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856398 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856413 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856422 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856430 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856441 1130033 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 21:00:37.856451 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856466 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 21:00:37.856474 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856482 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856496 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 21:00:37.856511 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 21:00:37.856520 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856527 1130033 command_runner.go:130] >       "size": "63051080",
	I0731 21:00:37.856536 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856556 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856564 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856572 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856579 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856588 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856598 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856606 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856615 1130033 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 21:00:37.856625 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856636 1130033 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 21:00:37.856644 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856653 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856663 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 21:00:37.856678 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 21:00:37.856690 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856700 1130033 command_runner.go:130] >       "size": "750414",
	I0731 21:00:37.856709 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856719 1130033 command_runner.go:130] >         "value": "65535"
	I0731 21:00:37.856728 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856737 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856745 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856754 1130033 command_runner.go:130] >       "pinned": true
	I0731 21:00:37.856760 1130033 command_runner.go:130] >     }
	I0731 21:00:37.856766 1130033 command_runner.go:130] >   ]
	I0731 21:00:37.856770 1130033 command_runner.go:130] > }
	I0731 21:00:37.857056 1130033 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:00:37.857074 1130033 crio.go:433] Images already preloaded, skipping extraction
	I0731 21:00:37.857139 1130033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:00:37.891943 1130033 command_runner.go:130] > {
	I0731 21:00:37.891974 1130033 command_runner.go:130] >   "images": [
	I0731 21:00:37.891980 1130033 command_runner.go:130] >     {
	I0731 21:00:37.891988 1130033 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 21:00:37.891993 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.891999 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 21:00:37.892004 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892008 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892017 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 21:00:37.892027 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 21:00:37.892032 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892043 1130033 command_runner.go:130] >       "size": "87165492",
	I0731 21:00:37.892051 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892059 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892070 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892078 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892082 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892097 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892110 1130033 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 21:00:37.892117 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892129 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 21:00:37.892138 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892145 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892157 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 21:00:37.892166 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 21:00:37.892172 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892183 1130033 command_runner.go:130] >       "size": "87174707",
	I0731 21:00:37.892196 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892213 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892222 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892231 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892239 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892246 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892256 1130033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 21:00:37.892265 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892277 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 21:00:37.892287 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892297 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892309 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 21:00:37.892323 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 21:00:37.892332 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892339 1130033 command_runner.go:130] >       "size": "1363676",
	I0731 21:00:37.892344 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892352 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892375 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892386 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892390 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892396 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892408 1130033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 21:00:37.892418 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892424 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 21:00:37.892430 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892437 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892452 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 21:00:37.892472 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 21:00:37.892481 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892492 1130033 command_runner.go:130] >       "size": "31470524",
	I0731 21:00:37.892502 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892510 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892514 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892523 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892531 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892540 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892552 1130033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 21:00:37.892562 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892573 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 21:00:37.892592 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892600 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892620 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 21:00:37.892636 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 21:00:37.892644 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892654 1130033 command_runner.go:130] >       "size": "61245718",
	I0731 21:00:37.892663 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892672 1130033 command_runner.go:130] >       "username": "nonroot",
	I0731 21:00:37.892680 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892684 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892688 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892696 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892709 1130033 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 21:00:37.892719 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892729 1130033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 21:00:37.892738 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892747 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892761 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 21:00:37.892772 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 21:00:37.892780 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892787 1130033 command_runner.go:130] >       "size": "150779692",
	I0731 21:00:37.892797 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.892806 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.892818 1130033 command_runner.go:130] >       },
	I0731 21:00:37.892828 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892837 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892846 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892853 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892857 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892869 1130033 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 21:00:37.892879 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892889 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 21:00:37.892898 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892908 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892923 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 21:00:37.892938 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 21:00:37.892945 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892950 1130033 command_runner.go:130] >       "size": "117609954",
	I0731 21:00:37.892955 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.892963 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.892969 1130033 command_runner.go:130] >       },
	I0731 21:00:37.892977 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892984 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892993 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893001 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893007 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893019 1130033 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 21:00:37.893028 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893033 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 21:00:37.893041 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893048 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893074 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 21:00:37.893092 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 21:00:37.893097 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893104 1130033 command_runner.go:130] >       "size": "112198984",
	I0731 21:00:37.893110 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893115 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.893118 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893122 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893127 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893133 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893139 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893150 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893164 1130033 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 21:00:37.893170 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893181 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 21:00:37.893189 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893196 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893207 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 21:00:37.893224 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 21:00:37.893232 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893241 1130033 command_runner.go:130] >       "size": "85953945",
	I0731 21:00:37.893250 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.893260 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893269 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893279 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893287 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893296 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893304 1130033 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 21:00:37.893310 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893319 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 21:00:37.893328 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893337 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893352 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 21:00:37.893371 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 21:00:37.893380 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893387 1130033 command_runner.go:130] >       "size": "63051080",
	I0731 21:00:37.893391 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893400 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.893406 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893416 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893426 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893435 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893443 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893449 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893459 1130033 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 21:00:37.893468 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893473 1130033 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 21:00:37.893481 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893487 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893501 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 21:00:37.893521 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 21:00:37.893526 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893532 1130033 command_runner.go:130] >       "size": "750414",
	I0731 21:00:37.893541 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893550 1130033 command_runner.go:130] >         "value": "65535"
	I0731 21:00:37.893561 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893568 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893577 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893584 1130033 command_runner.go:130] >       "pinned": true
	I0731 21:00:37.893593 1130033 command_runner.go:130] >     }
	I0731 21:00:37.893598 1130033 command_runner.go:130] >   ]
	I0731 21:00:37.893606 1130033 command_runner.go:130] > }
	I0731 21:00:37.893786 1130033 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:00:37.893804 1130033 cache_images.go:84] Images are preloaded, skipping loading
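	
	Note: the two "crictl images --output json" dumps above are the listings minikube compares against its preload manifest before concluding that loading can be skipped. As a rough, hedged sketch of reproducing that check by hand (the profile name multinode-220043 is taken from the kubelet flags further down, and jq on the workstation is an assumption, not part of this run), the tags and sizes can be pulled out of the same JSON:
	
	  out/minikube-linux-amd64 -p multinode-220043 ssh -- sudo crictl images --output json \
	    | jq -r '.images[] | "\(.repoTags[0])  \(.size)"'
	
	Each line of that output should correspond to one repoTags/size pair in the dump above.
	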
	I0731 21:00:37.893814 1130033 kubeadm.go:934] updating node { 192.168.39.184 8443 v1.30.3 crio true true} ...
	I0731 21:00:37.893988 1130033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-220043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
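	
	Note: the [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders for this node; the --hostname-override and --node-ip flags line up with the ClusterName and node address in the config line above. A minimal, hedged way to confirm what actually landed in the guest (assuming the stock systemd tooling in the minikube ISO) is:
	
	  out/minikube-linux-amd64 -p multinode-220043 ssh -- sudo systemctl cat kubelet
	
	which prints the base unit plus every drop-in, including the ExecStart line shown here.
	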
	I0731 21:00:37.894079 1130033 ssh_runner.go:195] Run: crio config
	I0731 21:00:37.927262 1130033 command_runner.go:130] ! time="2024-07-31 21:00:37.914158409Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 21:00:37.932848 1130033 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 21:00:37.939049 1130033 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 21:00:37.939088 1130033 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 21:00:37.939098 1130033 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 21:00:37.939102 1130033 command_runner.go:130] > #
	I0731 21:00:37.939109 1130033 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 21:00:37.939115 1130033 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 21:00:37.939121 1130033 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 21:00:37.939127 1130033 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 21:00:37.939131 1130033 command_runner.go:130] > # reload'.
	I0731 21:00:37.939137 1130033 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 21:00:37.939143 1130033 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 21:00:37.939151 1130033 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 21:00:37.939164 1130033 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 21:00:37.939178 1130033 command_runner.go:130] > [crio]
	I0731 21:00:37.939190 1130033 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 21:00:37.939199 1130033 command_runner.go:130] > # containers images, in this directory.
	I0731 21:00:37.939203 1130033 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 21:00:37.939212 1130033 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 21:00:37.939217 1130033 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 21:00:37.939224 1130033 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 21:00:37.939230 1130033 command_runner.go:130] > # imagestore = ""
	I0731 21:00:37.939237 1130033 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 21:00:37.939249 1130033 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 21:00:37.939261 1130033 command_runner.go:130] > storage_driver = "overlay"
	I0731 21:00:37.939273 1130033 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 21:00:37.939284 1130033 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 21:00:37.939304 1130033 command_runner.go:130] > storage_option = [
	I0731 21:00:37.939312 1130033 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 21:00:37.939315 1130033 command_runner.go:130] > ]
	I0731 21:00:37.939321 1130033 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 21:00:37.939330 1130033 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 21:00:37.939341 1130033 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 21:00:37.939350 1130033 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 21:00:37.939363 1130033 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 21:00:37.939373 1130033 command_runner.go:130] > # always happen on a node reboot
	I0731 21:00:37.939384 1130033 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 21:00:37.939400 1130033 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 21:00:37.939409 1130033 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 21:00:37.939419 1130033 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 21:00:37.939431 1130033 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 21:00:37.939445 1130033 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 21:00:37.939461 1130033 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 21:00:37.939470 1130033 command_runner.go:130] > # internal_wipe = true
	I0731 21:00:37.939481 1130033 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 21:00:37.939490 1130033 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 21:00:37.939496 1130033 command_runner.go:130] > # internal_repair = false
	I0731 21:00:37.939520 1130033 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 21:00:37.939532 1130033 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 21:00:37.939544 1130033 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 21:00:37.939557 1130033 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 21:00:37.939569 1130033 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 21:00:37.939575 1130033 command_runner.go:130] > [crio.api]
	I0731 21:00:37.939582 1130033 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 21:00:37.939592 1130033 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 21:00:37.939607 1130033 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 21:00:37.939616 1130033 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 21:00:37.939630 1130033 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 21:00:37.939641 1130033 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 21:00:37.939650 1130033 command_runner.go:130] > # stream_port = "0"
	I0731 21:00:37.939659 1130033 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 21:00:37.939666 1130033 command_runner.go:130] > # stream_enable_tls = false
	I0731 21:00:37.939675 1130033 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 21:00:37.939685 1130033 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 21:00:37.939699 1130033 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 21:00:37.939712 1130033 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 21:00:37.939721 1130033 command_runner.go:130] > # minutes.
	I0731 21:00:37.939730 1130033 command_runner.go:130] > # stream_tls_cert = ""
	I0731 21:00:37.939741 1130033 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 21:00:37.939750 1130033 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 21:00:37.939760 1130033 command_runner.go:130] > # stream_tls_key = ""
	I0731 21:00:37.939776 1130033 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 21:00:37.939789 1130033 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 21:00:37.939813 1130033 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 21:00:37.939822 1130033 command_runner.go:130] > # stream_tls_ca = ""
	I0731 21:00:37.939833 1130033 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 21:00:37.939841 1130033 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 21:00:37.939855 1130033 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 21:00:37.939867 1130033 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 21:00:37.939882 1130033 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 21:00:37.939895 1130033 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 21:00:37.939904 1130033 command_runner.go:130] > [crio.runtime]
	I0731 21:00:37.939916 1130033 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 21:00:37.939925 1130033 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 21:00:37.939935 1130033 command_runner.go:130] > # "nofile=1024:2048"
	I0731 21:00:37.939948 1130033 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 21:00:37.939959 1130033 command_runner.go:130] > # default_ulimits = [
	I0731 21:00:37.939967 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.939980 1130033 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 21:00:37.939990 1130033 command_runner.go:130] > # no_pivot = false
	I0731 21:00:37.940001 1130033 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 21:00:37.940010 1130033 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 21:00:37.940018 1130033 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 21:00:37.940031 1130033 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 21:00:37.940042 1130033 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 21:00:37.940055 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 21:00:37.940065 1130033 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 21:00:37.940075 1130033 command_runner.go:130] > # Cgroup setting for conmon
	I0731 21:00:37.940087 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 21:00:37.940108 1130033 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 21:00:37.940118 1130033 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 21:00:37.940130 1130033 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 21:00:37.940148 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 21:00:37.940157 1130033 command_runner.go:130] > conmon_env = [
	I0731 21:00:37.940167 1130033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 21:00:37.940172 1130033 command_runner.go:130] > ]
	I0731 21:00:37.940181 1130033 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 21:00:37.940192 1130033 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 21:00:37.940204 1130033 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 21:00:37.940213 1130033 command_runner.go:130] > # default_env = [
	I0731 21:00:37.940222 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940234 1130033 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 21:00:37.940249 1130033 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0731 21:00:37.940255 1130033 command_runner.go:130] > # selinux = false
	I0731 21:00:37.940263 1130033 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 21:00:37.940277 1130033 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 21:00:37.940289 1130033 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 21:00:37.940299 1130033 command_runner.go:130] > # seccomp_profile = ""
	I0731 21:00:37.940310 1130033 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 21:00:37.940322 1130033 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 21:00:37.940335 1130033 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 21:00:37.940342 1130033 command_runner.go:130] > # which might increase security.
	I0731 21:00:37.940350 1130033 command_runner.go:130] > # This option is currently deprecated,
	I0731 21:00:37.940362 1130033 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 21:00:37.940373 1130033 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 21:00:37.940384 1130033 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 21:00:37.940396 1130033 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 21:00:37.940409 1130033 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 21:00:37.940421 1130033 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 21:00:37.940428 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.940434 1130033 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 21:00:37.940446 1130033 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 21:00:37.940457 1130033 command_runner.go:130] > # the cgroup blockio controller.
	I0731 21:00:37.940463 1130033 command_runner.go:130] > # blockio_config_file = ""
	I0731 21:00:37.940476 1130033 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 21:00:37.940483 1130033 command_runner.go:130] > # blockio parameters.
	I0731 21:00:37.940490 1130033 command_runner.go:130] > # blockio_reload = false
	I0731 21:00:37.940507 1130033 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 21:00:37.940514 1130033 command_runner.go:130] > # irqbalance daemon.
	I0731 21:00:37.940521 1130033 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 21:00:37.940538 1130033 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 21:00:37.940551 1130033 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 21:00:37.940564 1130033 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 21:00:37.940577 1130033 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 21:00:37.940590 1130033 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 21:00:37.940598 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.940602 1130033 command_runner.go:130] > # rdt_config_file = ""
	I0731 21:00:37.940613 1130033 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 21:00:37.940623 1130033 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 21:00:37.940654 1130033 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 21:00:37.940664 1130033 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 21:00:37.940674 1130033 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 21:00:37.940681 1130033 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 21:00:37.940687 1130033 command_runner.go:130] > # will be added.
	I0731 21:00:37.940693 1130033 command_runner.go:130] > # default_capabilities = [
	I0731 21:00:37.940702 1130033 command_runner.go:130] > # 	"CHOWN",
	I0731 21:00:37.940712 1130033 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 21:00:37.940722 1130033 command_runner.go:130] > # 	"FSETID",
	I0731 21:00:37.940732 1130033 command_runner.go:130] > # 	"FOWNER",
	I0731 21:00:37.940741 1130033 command_runner.go:130] > # 	"SETGID",
	I0731 21:00:37.940750 1130033 command_runner.go:130] > # 	"SETUID",
	I0731 21:00:37.940756 1130033 command_runner.go:130] > # 	"SETPCAP",
	I0731 21:00:37.940764 1130033 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 21:00:37.940767 1130033 command_runner.go:130] > # 	"KILL",
	I0731 21:00:37.940770 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940784 1130033 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 21:00:37.940798 1130033 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 21:00:37.940809 1130033 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 21:00:37.940822 1130033 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 21:00:37.940834 1130033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 21:00:37.940843 1130033 command_runner.go:130] > default_sysctls = [
	I0731 21:00:37.940852 1130033 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 21:00:37.940857 1130033 command_runner.go:130] > ]
	I0731 21:00:37.940864 1130033 command_runner.go:130] > # List of devices on the host that a
	I0731 21:00:37.940876 1130033 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 21:00:37.940886 1130033 command_runner.go:130] > # allowed_devices = [
	I0731 21:00:37.940895 1130033 command_runner.go:130] > # 	"/dev/fuse",
	I0731 21:00:37.940903 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940910 1130033 command_runner.go:130] > # List of additional devices. specified as
	I0731 21:00:37.940926 1130033 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 21:00:37.940936 1130033 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 21:00:37.940950 1130033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 21:00:37.940960 1130033 command_runner.go:130] > # additional_devices = [
	I0731 21:00:37.940968 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940980 1130033 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 21:00:37.940989 1130033 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 21:00:37.940995 1130033 command_runner.go:130] > # 	"/etc/cdi",
	I0731 21:00:37.941005 1130033 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 21:00:37.941011 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.941021 1130033 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 21:00:37.941030 1130033 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 21:00:37.941038 1130033 command_runner.go:130] > # Defaults to false.
	I0731 21:00:37.941050 1130033 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 21:00:37.941063 1130033 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 21:00:37.941076 1130033 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 21:00:37.941084 1130033 command_runner.go:130] > # hooks_dir = [
	I0731 21:00:37.941095 1130033 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 21:00:37.941102 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.941109 1130033 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 21:00:37.941121 1130033 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 21:00:37.941131 1130033 command_runner.go:130] > # its default mounts from the following two files:
	I0731 21:00:37.941139 1130033 command_runner.go:130] > #
	I0731 21:00:37.941152 1130033 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 21:00:37.941167 1130033 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 21:00:37.941178 1130033 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 21:00:37.941187 1130033 command_runner.go:130] > #
	I0731 21:00:37.941196 1130033 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 21:00:37.941211 1130033 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 21:00:37.941225 1130033 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 21:00:37.941237 1130033 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 21:00:37.941244 1130033 command_runner.go:130] > #
	I0731 21:00:37.941251 1130033 command_runner.go:130] > # default_mounts_file = ""
	I0731 21:00:37.941263 1130033 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 21:00:37.941276 1130033 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 21:00:37.941282 1130033 command_runner.go:130] > pids_limit = 1024
	I0731 21:00:37.941291 1130033 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0731 21:00:37.941304 1130033 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 21:00:37.941317 1130033 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 21:00:37.941332 1130033 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 21:00:37.941341 1130033 command_runner.go:130] > # log_size_max = -1
	I0731 21:00:37.941355 1130033 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 21:00:37.941366 1130033 command_runner.go:130] > # log_to_journald = false
	I0731 21:00:37.941378 1130033 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 21:00:37.941389 1130033 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 21:00:37.941401 1130033 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 21:00:37.941412 1130033 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 21:00:37.941424 1130033 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 21:00:37.941433 1130033 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 21:00:37.941444 1130033 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 21:00:37.941451 1130033 command_runner.go:130] > # read_only = false
	I0731 21:00:37.941459 1130033 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 21:00:37.941473 1130033 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 21:00:37.941483 1130033 command_runner.go:130] > # live configuration reload.
	I0731 21:00:37.941490 1130033 command_runner.go:130] > # log_level = "info"
	I0731 21:00:37.941505 1130033 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 21:00:37.941516 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.941526 1130033 command_runner.go:130] > # log_filter = ""
	I0731 21:00:37.941534 1130033 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 21:00:37.941546 1130033 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 21:00:37.941556 1130033 command_runner.go:130] > # separated by comma.
	I0731 21:00:37.941571 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941579 1130033 command_runner.go:130] > # uid_mappings = ""
	I0731 21:00:37.941592 1130033 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 21:00:37.941604 1130033 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 21:00:37.941613 1130033 command_runner.go:130] > # separated by comma.
	I0731 21:00:37.941624 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941632 1130033 command_runner.go:130] > # gid_mappings = ""
	I0731 21:00:37.941645 1130033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 21:00:37.941657 1130033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 21:00:37.941669 1130033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 21:00:37.941684 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941694 1130033 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 21:00:37.941705 1130033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 21:00:37.941714 1130033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 21:00:37.941726 1130033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 21:00:37.941742 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941759 1130033 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 21:00:37.941771 1130033 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 21:00:37.941783 1130033 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 21:00:37.941792 1130033 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 21:00:37.941799 1130033 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 21:00:37.941808 1130033 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 21:00:37.941821 1130033 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 21:00:37.941832 1130033 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 21:00:37.941843 1130033 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 21:00:37.941851 1130033 command_runner.go:130] > drop_infra_ctr = false
	I0731 21:00:37.941863 1130033 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 21:00:37.941874 1130033 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 21:00:37.941883 1130033 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 21:00:37.941892 1130033 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 21:00:37.941907 1130033 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 21:00:37.941919 1130033 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 21:00:37.941931 1130033 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 21:00:37.941942 1130033 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 21:00:37.941951 1130033 command_runner.go:130] > # shared_cpuset = ""
	I0731 21:00:37.941961 1130033 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 21:00:37.941969 1130033 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 21:00:37.941979 1130033 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 21:00:37.941993 1130033 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 21:00:37.942003 1130033 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 21:00:37.942015 1130033 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 21:00:37.942028 1130033 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 21:00:37.942037 1130033 command_runner.go:130] > # enable_criu_support = false
	I0731 21:00:37.942045 1130033 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 21:00:37.942053 1130033 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 21:00:37.942060 1130033 command_runner.go:130] > # enable_pod_events = false
	I0731 21:00:37.942073 1130033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 21:00:37.942085 1130033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 21:00:37.942097 1130033 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 21:00:37.942106 1130033 command_runner.go:130] > # default_runtime = "runc"
	I0731 21:00:37.942117 1130033 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 21:00:37.942131 1130033 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 21:00:37.942144 1130033 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 21:00:37.942158 1130033 command_runner.go:130] > # creation as a file is not desired either.
	I0731 21:00:37.942174 1130033 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 21:00:37.942184 1130033 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 21:00:37.942194 1130033 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 21:00:37.942202 1130033 command_runner.go:130] > # ]
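	A minimal sketch of this option, using the /etc/hostname case the comment above describes (illustrative only, not part of this run's configuration):

	  # Fail container creation outright instead of creating /etc/hostname as a directory.
	  absent_mount_sources_to_reject = [
	  	"/etc/hostname",
	  ]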
	I0731 21:00:37.942215 1130033 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 21:00:37.942224 1130033 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 21:00:37.942237 1130033 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 21:00:37.942249 1130033 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 21:00:37.942258 1130033 command_runner.go:130] > #
	I0731 21:00:37.942266 1130033 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 21:00:37.942276 1130033 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 21:00:37.942303 1130033 command_runner.go:130] > # runtime_type = "oci"
	I0731 21:00:37.942311 1130033 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 21:00:37.942321 1130033 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 21:00:37.942332 1130033 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 21:00:37.942343 1130033 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 21:00:37.942352 1130033 command_runner.go:130] > # monitor_env = []
	I0731 21:00:37.942363 1130033 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 21:00:37.942375 1130033 command_runner.go:130] > # allowed_annotations = []
	I0731 21:00:37.942386 1130033 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 21:00:37.942392 1130033 command_runner.go:130] > # Where:
	I0731 21:00:37.942399 1130033 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 21:00:37.942412 1130033 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 21:00:37.942424 1130033 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 21:00:37.942436 1130033 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 21:00:37.942445 1130033 command_runner.go:130] > #   in $PATH.
	I0731 21:00:37.942458 1130033 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 21:00:37.942469 1130033 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 21:00:37.942478 1130033 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 21:00:37.942486 1130033 command_runner.go:130] > #   state.
	I0731 21:00:37.942505 1130033 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 21:00:37.942513 1130033 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I0731 21:00:37.942522 1130033 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 21:00:37.942533 1130033 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 21:00:37.942545 1130033 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 21:00:37.942557 1130033 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 21:00:37.942569 1130033 command_runner.go:130] > #   The currently recognized values are:
	I0731 21:00:37.942582 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 21:00:37.942600 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 21:00:37.942612 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 21:00:37.942624 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 21:00:37.942638 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 21:00:37.942648 1130033 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 21:00:37.942660 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 21:00:37.942674 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 21:00:37.942688 1130033 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 21:00:37.942701 1130033 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 21:00:37.942712 1130033 command_runner.go:130] > #   deprecated option "conmon".
	I0731 21:00:37.942726 1130033 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 21:00:37.942734 1130033 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 21:00:37.942745 1130033 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 21:00:37.942755 1130033 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 21:00:37.942768 1130033 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 21:00:37.942779 1130033 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 21:00:37.942792 1130033 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 21:00:37.942803 1130033 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 21:00:37.942812 1130033 command_runner.go:130] > #
	I0731 21:00:37.942821 1130033 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 21:00:37.942825 1130033 command_runner.go:130] > #
	I0731 21:00:37.942836 1130033 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 21:00:37.942849 1130033 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 21:00:37.942857 1130033 command_runner.go:130] > #
	I0731 21:00:37.942870 1130033 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 21:00:37.942884 1130033 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 21:00:37.942892 1130033 command_runner.go:130] > #
	I0731 21:00:37.942904 1130033 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 21:00:37.942911 1130033 command_runner.go:130] > # feature.
	I0731 21:00:37.942914 1130033 command_runner.go:130] > #
	I0731 21:00:37.942926 1130033 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 21:00:37.942939 1130033 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 21:00:37.942952 1130033 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 21:00:37.942968 1130033 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 21:00:37.942980 1130033 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 21:00:37.942988 1130033 command_runner.go:130] > #
	I0731 21:00:37.942996 1130033 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 21:00:37.943006 1130033 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 21:00:37.943013 1130033 command_runner.go:130] > #
	I0731 21:00:37.943024 1130033 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 21:00:37.943036 1130033 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 21:00:37.943043 1130033 command_runner.go:130] > #
	I0731 21:00:37.943054 1130033 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 21:00:37.943067 1130033 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 21:00:37.943076 1130033 command_runner.go:130] > # limitation.
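	A minimal sketch of a runtime entry that permits the seccomp notifier annotation described above (the handler name is an illustrative assumption; the pod would additionally carry the io.kubernetes.cri-o.seccompNotifierAction=stop annotation and restartPolicy: Never):

	  [crio.runtime.runtimes.runc-debug]
	  runtime_path = "/usr/bin/runc"
	  runtime_type = "oci"
	  # Allow this handler to process the seccomp notifier annotation.
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]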
	I0731 21:00:37.943082 1130033 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 21:00:37.943091 1130033 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 21:00:37.943100 1130033 command_runner.go:130] > runtime_type = "oci"
	I0731 21:00:37.943110 1130033 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 21:00:37.943120 1130033 command_runner.go:130] > runtime_config_path = ""
	I0731 21:00:37.943138 1130033 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 21:00:37.943148 1130033 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 21:00:37.943157 1130033 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 21:00:37.943164 1130033 command_runner.go:130] > monitor_env = [
	I0731 21:00:37.943170 1130033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 21:00:37.943177 1130033 command_runner.go:130] > ]
	I0731 21:00:37.943188 1130033 command_runner.go:130] > privileged_without_host_devices = false
	I0731 21:00:37.943201 1130033 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 21:00:37.943213 1130033 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 21:00:37.943226 1130033 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 21:00:37.943242 1130033 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 21:00:37.943253 1130033 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 21:00:37.943264 1130033 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 21:00:37.943282 1130033 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 21:00:37.943297 1130033 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 21:00:37.943306 1130033 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 21:00:37.943386 1130033 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 21:00:37.943426 1130033 command_runner.go:130] > # Example:
	I0731 21:00:37.943436 1130033 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 21:00:37.943444 1130033 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 21:00:37.943466 1130033 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 21:00:37.943474 1130033 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 21:00:37.943482 1130033 command_runner.go:130] > # cpuset = "0-1"
	I0731 21:00:37.943487 1130033 command_runner.go:130] > # cpushares = 0
	I0731 21:00:37.943491 1130033 command_runner.go:130] > # Where:
	I0731 21:00:37.943496 1130033 command_runner.go:130] > # The workload name is workload-type.
	I0731 21:00:37.943508 1130033 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 21:00:37.943518 1130033 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 21:00:37.943526 1130033 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 21:00:37.943539 1130033 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 21:00:37.943547 1130033 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 21:00:37.943558 1130033 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 21:00:37.943572 1130033 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 21:00:37.943588 1130033 command_runner.go:130] > # Default value is set to true
	I0731 21:00:37.943599 1130033 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 21:00:37.943611 1130033 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 21:00:37.943621 1130033 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 21:00:37.943631 1130033 command_runner.go:130] > # Default value is set to 'false'
	I0731 21:00:37.943638 1130033 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 21:00:37.943645 1130033 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 21:00:37.943651 1130033 command_runner.go:130] > #
	I0731 21:00:37.943657 1130033 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 21:00:37.943665 1130033 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 21:00:37.943673 1130033 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 21:00:37.943683 1130033 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 21:00:37.943691 1130033 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 21:00:37.943697 1130033 command_runner.go:130] > [crio.image]
	I0731 21:00:37.943703 1130033 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 21:00:37.943711 1130033 command_runner.go:130] > # default_transport = "docker://"
	I0731 21:00:37.943717 1130033 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 21:00:37.943725 1130033 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 21:00:37.943732 1130033 command_runner.go:130] > # global_auth_file = ""
	I0731 21:00:37.943737 1130033 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 21:00:37.943744 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.943749 1130033 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 21:00:37.943755 1130033 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 21:00:37.943762 1130033 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 21:00:37.943767 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.943777 1130033 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 21:00:37.943787 1130033 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 21:00:37.943800 1130033 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 21:00:37.943810 1130033 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 21:00:37.943818 1130033 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 21:00:37.943823 1130033 command_runner.go:130] > # pause_command = "/pause"
	I0731 21:00:37.943833 1130033 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 21:00:37.943841 1130033 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 21:00:37.943849 1130033 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 21:00:37.943863 1130033 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 21:00:37.943872 1130033 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 21:00:37.943880 1130033 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 21:00:37.943884 1130033 command_runner.go:130] > # pinned_images = [
	I0731 21:00:37.943888 1130033 command_runner.go:130] > # ]
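	A minimal sketch of the three pattern styles described above (the image names are illustrative assumptions, not images pinned in this run):

	  pinned_images = [
	  	"registry.k8s.io/pause:3.9",   # exact match (must match the entire name)
	  	"registry.k8s.io/kube-*",      # glob match (wildcard only at the end)
	  	"*coredns*",                   # keyword match (wildcards on both ends)
	  ]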
	I0731 21:00:37.943894 1130033 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 21:00:37.943903 1130033 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 21:00:37.943911 1130033 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 21:00:37.943919 1130033 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 21:00:37.943925 1130033 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 21:00:37.943930 1130033 command_runner.go:130] > # signature_policy = ""
	I0731 21:00:37.943935 1130033 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 21:00:37.943943 1130033 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 21:00:37.943950 1130033 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 21:00:37.943958 1130033 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 21:00:37.943964 1130033 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 21:00:37.943971 1130033 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 21:00:37.943977 1130033 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 21:00:37.943985 1130033 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 21:00:37.943992 1130033 command_runner.go:130] > # changing them here.
	I0731 21:00:37.943996 1130033 command_runner.go:130] > # insecure_registries = [
	I0731 21:00:37.944001 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944007 1130033 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 21:00:37.944012 1130033 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 21:00:37.944016 1130033 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 21:00:37.944023 1130033 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 21:00:37.944027 1130033 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 21:00:37.944038 1130033 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 21:00:37.944045 1130033 command_runner.go:130] > # CNI plugins.
	I0731 21:00:37.944048 1130033 command_runner.go:130] > [crio.network]
	I0731 21:00:37.944057 1130033 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 21:00:37.944062 1130033 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 21:00:37.944068 1130033 command_runner.go:130] > # cni_default_network = ""
	I0731 21:00:37.944075 1130033 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 21:00:37.944082 1130033 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 21:00:37.944103 1130033 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 21:00:37.944115 1130033 command_runner.go:130] > # plugin_dirs = [
	I0731 21:00:37.944119 1130033 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 21:00:37.944125 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944131 1130033 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 21:00:37.944136 1130033 command_runner.go:130] > [crio.metrics]
	I0731 21:00:37.944141 1130033 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 21:00:37.944148 1130033 command_runner.go:130] > enable_metrics = true
	I0731 21:00:37.944152 1130033 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 21:00:37.944159 1130033 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 21:00:37.944165 1130033 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 21:00:37.944177 1130033 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 21:00:37.944190 1130033 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 21:00:37.944199 1130033 command_runner.go:130] > # metrics_collectors = [
	I0731 21:00:37.944207 1130033 command_runner.go:130] > # 	"operations",
	I0731 21:00:37.944212 1130033 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 21:00:37.944218 1130033 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 21:00:37.944222 1130033 command_runner.go:130] > # 	"operations_errors",
	I0731 21:00:37.944229 1130033 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 21:00:37.944234 1130033 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 21:00:37.944240 1130033 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 21:00:37.944245 1130033 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 21:00:37.944251 1130033 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 21:00:37.944255 1130033 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 21:00:37.944262 1130033 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 21:00:37.944266 1130033 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 21:00:37.944272 1130033 command_runner.go:130] > # 	"containers_oom_total",
	I0731 21:00:37.944277 1130033 command_runner.go:130] > # 	"containers_oom",
	I0731 21:00:37.944282 1130033 command_runner.go:130] > # 	"processes_defunct",
	I0731 21:00:37.944286 1130033 command_runner.go:130] > # 	"operations_total",
	I0731 21:00:37.944291 1130033 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 21:00:37.944297 1130033 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 21:00:37.944301 1130033 command_runner.go:130] > # 	"operations_errors_total",
	I0731 21:00:37.944308 1130033 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 21:00:37.944314 1130033 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 21:00:37.944320 1130033 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 21:00:37.944325 1130033 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 21:00:37.944335 1130033 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 21:00:37.944341 1130033 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 21:00:37.944347 1130033 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 21:00:37.944353 1130033 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 21:00:37.944356 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944363 1130033 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 21:00:37.944370 1130033 command_runner.go:130] > # metrics_port = 9090
	I0731 21:00:37.944375 1130033 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 21:00:37.944381 1130033 command_runner.go:130] > # metrics_socket = ""
	I0731 21:00:37.944386 1130033 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 21:00:37.944394 1130033 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 21:00:37.944401 1130033 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 21:00:37.944408 1130033 command_runner.go:130] > # certificate on any modification event.
	I0731 21:00:37.944414 1130033 command_runner.go:130] > # metrics_cert = ""
	I0731 21:00:37.944423 1130033 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 21:00:37.944430 1130033 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 21:00:37.944434 1130033 command_runner.go:130] > # metrics_key = ""
	I0731 21:00:37.944441 1130033 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 21:00:37.944448 1130033 command_runner.go:130] > [crio.tracing]
	I0731 21:00:37.944454 1130033 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 21:00:37.944460 1130033 command_runner.go:130] > # enable_tracing = false
	I0731 21:00:37.944465 1130033 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 21:00:37.944469 1130033 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 21:00:37.944477 1130033 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 21:00:37.944483 1130033 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 21:00:37.944488 1130033 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 21:00:37.944503 1130033 command_runner.go:130] > [crio.nri]
	I0731 21:00:37.944508 1130033 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 21:00:37.944515 1130033 command_runner.go:130] > # enable_nri = false
	I0731 21:00:37.944519 1130033 command_runner.go:130] > # NRI socket to listen on.
	I0731 21:00:37.944525 1130033 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 21:00:37.944530 1130033 command_runner.go:130] > # NRI plugin directory to use.
	I0731 21:00:37.944537 1130033 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 21:00:37.944543 1130033 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 21:00:37.944550 1130033 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 21:00:37.944556 1130033 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 21:00:37.944562 1130033 command_runner.go:130] > # nri_disable_connections = false
	I0731 21:00:37.944567 1130033 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 21:00:37.944577 1130033 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 21:00:37.944582 1130033 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 21:00:37.944589 1130033 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 21:00:37.944594 1130033 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 21:00:37.944601 1130033 command_runner.go:130] > [crio.stats]
	I0731 21:00:37.944609 1130033 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 21:00:37.944617 1130033 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 21:00:37.944622 1130033 command_runner.go:130] > # stats_collection_period = 0
	I0731 21:00:37.944751 1130033 cni.go:84] Creating CNI manager for ""
	I0731 21:00:37.944761 1130033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 21:00:37.944770 1130033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:00:37.944793 1130033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-220043 NodeName:multinode-220043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:00:37.944959 1130033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-220043"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:00:37.945039 1130033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:00:37.955083 1130033 command_runner.go:130] > kubeadm
	I0731 21:00:37.955111 1130033 command_runner.go:130] > kubectl
	I0731 21:00:37.955118 1130033 command_runner.go:130] > kubelet
	I0731 21:00:37.955141 1130033 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:00:37.955190 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:00:37.965316 1130033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 21:00:37.981934 1130033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:00:37.999106 1130033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 21:00:38.016396 1130033 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I0731 21:00:38.020492 1130033 command_runner.go:130] > 192.168.39.184	control-plane.minikube.internal
	I0731 21:00:38.020669 1130033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:38.156809 1130033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:38.171786 1130033 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043 for IP: 192.168.39.184
	I0731 21:00:38.171813 1130033 certs.go:194] generating shared ca certs ...
	I0731 21:00:38.171837 1130033 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:38.172045 1130033 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:00:38.172118 1130033 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:00:38.172134 1130033 certs.go:256] generating profile certs ...
	I0731 21:00:38.172244 1130033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/client.key
	I0731 21:00:38.172329 1130033 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key.bba98ef8
	I0731 21:00:38.172370 1130033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key
	I0731 21:00:38.172382 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 21:00:38.172403 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 21:00:38.172421 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 21:00:38.172438 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 21:00:38.172453 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 21:00:38.172472 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 21:00:38.172491 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 21:00:38.172508 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 21:00:38.172594 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:00:38.172642 1130033 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:00:38.172655 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:00:38.172686 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:00:38.172717 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:00:38.172749 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:00:38.172803 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:00:38.172849 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.172870 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.172890 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.173579 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:00:38.197578 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:00:38.221931 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:00:38.248197 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:00:38.276634 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:00:38.303821 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:00:38.329821 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:00:38.353703 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:00:38.377229 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:00:38.401179 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:00:38.424074 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:00:38.448770 1130033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:00:38.467964 1130033 ssh_runner.go:195] Run: openssl version
	I0731 21:00:38.473611 1130033 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 21:00:38.473860 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:00:38.486279 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490657 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490907 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490975 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.496523 1130033 command_runner.go:130] > 3ec20f2e
	I0731 21:00:38.496885 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:00:38.507528 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:00:38.519695 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524099 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524192 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524238 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.529780 1130033 command_runner.go:130] > b5213941
	I0731 21:00:38.529863 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:00:38.540821 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:00:38.553100 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.557992 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.558035 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.558081 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.563979 1130033 command_runner.go:130] > 51391683
	I0731 21:00:38.564183 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:00:38.575782 1130033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:00:38.580437 1130033 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:00:38.580466 1130033 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 21:00:38.580473 1130033 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0731 21:00:38.580479 1130033 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 21:00:38.580485 1130033 command_runner.go:130] > Access: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580503 1130033 command_runner.go:130] > Modify: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580509 1130033 command_runner.go:130] > Change: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580514 1130033 command_runner.go:130] >  Birth: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580583 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:00:38.586563 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.586651 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:00:38.592638 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.592744 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:00:38.598372 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.598574 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:00:38.604259 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.604341 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:00:38.609641 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.609862 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:00:38.615675 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.615784 1130033 kubeadm.go:392] StartCluster: {Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:00:38.615899 1130033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:00:38.615950 1130033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:00:38.651337 1130033 command_runner.go:130] > 5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff
	I0731 21:00:38.651363 1130033 command_runner.go:130] > 84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d
	I0731 21:00:38.651369 1130033 command_runner.go:130] > 006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226
	I0731 21:00:38.651378 1130033 command_runner.go:130] > 3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250
	I0731 21:00:38.651383 1130033 command_runner.go:130] > 4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815
	I0731 21:00:38.651389 1130033 command_runner.go:130] > 42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b
	I0731 21:00:38.651393 1130033 command_runner.go:130] > a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e
	I0731 21:00:38.651400 1130033 command_runner.go:130] > 135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109
	I0731 21:00:38.651419 1130033 cri.go:89] found id: "5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff"
	I0731 21:00:38.651427 1130033 cri.go:89] found id: "84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d"
	I0731 21:00:38.651432 1130033 cri.go:89] found id: "006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226"
	I0731 21:00:38.651436 1130033 cri.go:89] found id: "3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250"
	I0731 21:00:38.651443 1130033 cri.go:89] found id: "4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815"
	I0731 21:00:38.651448 1130033 cri.go:89] found id: "42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b"
	I0731 21:00:38.651453 1130033 cri.go:89] found id: "a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e"
	I0731 21:00:38.651457 1130033 cri.go:89] found id: "135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109"
	I0731 21:00:38.651462 1130033 cri.go:89] found id: ""
	I0731 21:00:38.651511 1130033 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.027552546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8c7c9ee-de68-43ec-96ae-2b9b4e0643e1 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.028433430Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0f76f94-3123-4202-8fb7-a7f18cca977b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.028878710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459745028856690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0f76f94-3123-4202-8fb7-a7f18cca977b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.029547282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49083412-c633-49c6-a700-3afd93d42708 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.029600841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49083412-c633-49c6-a700-3afd93d42708 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.029991482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49083412-c633-49c6-a700-3afd93d42708 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.068364104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29c152ef-09b6-411a-970e-2903bbdadb8d name=/runtime.v1.RuntimeService/Version
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.068460357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29c152ef-09b6-411a-970e-2903bbdadb8d name=/runtime.v1.RuntimeService/Version
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.069868498Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfb48ee4-8f6b-4fb8-99a7-1ff04e4f11c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.070287690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459745070266049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfb48ee4-8f6b-4fb8-99a7-1ff04e4f11c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.070778990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41a7309b-4028-43cc-9e34-7b156ceb8b41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.070846114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41a7309b-4028-43cc-9e34-7b156ceb8b41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.071182095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41a7309b-4028-43cc-9e34-7b156ceb8b41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.120591333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87534038-c6af-4935-93b3-42fc8f530328 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.120727056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87534038-c6af-4935-93b3-42fc8f530328 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.122288783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=990a9ae2-663b-4b85-bb4a-755ce22a1d53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.122726292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459745122703643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=990a9ae2-663b-4b85-bb4a-755ce22a1d53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.123492211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0202c682-a510-4c1d-8395-8a677682cab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.123575880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0202c682-a510-4c1d-8395-8a677682cab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.124054768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0202c682-a510-4c1d-8395-8a677682cab5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.150928900Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=675a4a64-00c1-4816-af08-3c1eac5f5c40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.151339122Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6q6qp,Uid:d932eb77-1509-4fc7-a3ab-7315556707b0,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459678218351844,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086084578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nl9gd,Uid:d4a24288-5134-4044-9ca6-a310ea329b72,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1722459644488250039,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086072695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cf5142c-160e-4228-81ec-e47c2de8701d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459644453593792,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T21:00:44.086083287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&PodSandboxMetadata{Name:kindnet-dnshn,Uid:096976bc-3005-4c8d-88a7-da32abefc439,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722459644434641211,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086078436Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-fk7mt,Uid:74dfe9b6-475f-4ff0-899a-24eaccd3540f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459644400086588,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086081274Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-220043,Uid:41f86a014ebc23353f11c3fa77ec1699,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640614247957,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41f86a014ebc23353f11c3fa77ec1699,kubernetes.io/config.seen: 2024-07-31T21:00:40.081792063Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multi
node-220043,Uid:e19e708c02bfd2fbbc2583d15a2e1da3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640605011678,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.184:8443,kubernetes.io/config.hash: e19e708c02bfd2fbbc2583d15a2e1da3,kubernetes.io/config.seen: 2024-07-31T21:00:40.081788872Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-220043,Uid:db6c6716326d3b720901c9a477dd8c3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640601516278,Labels:map[string]string{component: kube-controller-manager,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db6c6716326d3b720901c9a477dd8c3b,kubernetes.io/config.seen: 2024-07-31T21:00:40.081790859Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-220043,Uid:83c07f69f3feae47ea13fe4158390271,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640594171466,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.184:2379,kuberne
tes.io/config.hash: 83c07f69f3feae47ea13fe4158390271,kubernetes.io/config.seen: 2024-07-31T21:00:40.081735952Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6q6qp,Uid:d932eb77-1509-4fc7-a3ab-7315556707b0,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459319619586436,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:55:19.306600470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cf5142c-160e-4228-81ec-e47c2de8701d,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1722459266804858566,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T20:54:26.493044778Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nl9gd,Uid:d4a24288-5134-4044-9ca6-a310ea329b72,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459266794821164,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:26.488424802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&PodSandboxMetadata{Name:kindnet-dnshn,Uid:096976bc-3005-4c8d-88a7-da32abefc439,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459251612290683,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:11.276366038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&PodSandboxMetadata{Name:kube-proxy-fk7mt,Uid:74dfe9b6-475f-4ff0-899a-24eaccd3540f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459251591887462,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:11.267520757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-220043,Uid:83c07f69f3feae47ea13fe4158390271,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231295051518,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.184:2379,kubernetes.io/config.hash: 83c07f69f3feae47ea13fe4158390271,kubernetes.io/config.seen: 2024-07-31T20:53:50.797201282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9
bc23dbd4bc101,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-220043,Uid:db6c6716326d3b720901c9a477dd8c3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231287624391,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db6c6716326d3b720901c9a477dd8c3b,kubernetes.io/config.seen: 2024-07-31T20:53:50.797199336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-220043,Uid:41f86a014ebc23353f11c3fa77ec1699,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231269795541,Labels:map[string]string{component: kube-scheduler,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41f86a014ebc23353f11c3fa77ec1699,kubernetes.io/config.seen: 2024-07-31T20:53:50.797200273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-220043,Uid:e19e708c02bfd2fbbc2583d15a2e1da3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231267507977,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint
: 192.168.39.184:8443,kubernetes.io/config.hash: e19e708c02bfd2fbbc2583d15a2e1da3,kubernetes.io/config.seen: 2024-07-31T20:53:50.797194573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=675a4a64-00c1-4816-af08-3c1eac5f5c40 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.152093922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dba446fc-2560-44de-9074-c49098f063d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.152172127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dba446fc-2560-44de-9074-c49098f063d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:02:25 multinode-220043 crio[2868]: time="2024-07-31 21:02:25.152532583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dba446fc-2560-44de-9074-c49098f063d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e91fa2d31eb3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   b2641b6a2dd07       busybox-fc5497c4f-6q6qp
	c68d47dc8c0a5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   750c635ae9cb3       kindnet-dnshn
	acca3e1ed045c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   78b6e70cf4ae0       coredns-7db6d8ff4d-nl9gd
	e89ed7025c9fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   1c6cc2200999b       storage-provisioner
	f1075b8b2253e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   102cb9e816e11       kube-proxy-fk7mt
	677830e9554b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ed61727ff3063       etcd-multinode-220043
	8450b5d7a0ec4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   fe6268d8b75d3       kube-apiserver-multinode-220043
	bea8448ffa5ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   cccc2114a9ae4       kube-scheduler-multinode-220043
	bc290d47eb9a6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   8f651a7dd37fc       kube-controller-manager-multinode-220043
	b129e1cbb75cd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   2146fff12e8f8       busybox-fc5497c4f-6q6qp
	5c8b4d91d3a89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   46d56e0cd6a93       coredns-7db6d8ff4d-nl9gd
	84a67e26466d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   764fe9a141516       storage-provisioner
	006d91418c209       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   705bafc71f35c       kindnet-dnshn
	3366da9a1a344       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   50d1ba3d1a7da       kube-proxy-fk7mt
	4789555cefe12       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   efcf0a24ebb92       kube-scheduler-multinode-220043
	42a835a7cd718       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   51a79137efba6       kube-controller-manager-multinode-220043
	a018ca65938ad       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   be0f244046475       etcd-multinode-220043
	135e3a794a671       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   a5bef938fe987       kube-apiserver-multinode-220043
	
	
	==> coredns [5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff] <==
	[INFO] 10.244.1.2:37910 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192437s
	[INFO] 10.244.1.2:48874 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159033s
	[INFO] 10.244.1.2:53899 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145334s
	[INFO] 10.244.1.2:45189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447127s
	[INFO] 10.244.1.2:56731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073554s
	[INFO] 10.244.1.2:54665 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068413s
	[INFO] 10.244.1.2:35044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069632s
	[INFO] 10.244.0.3:41195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076516s
	[INFO] 10.244.0.3:44592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040464s
	[INFO] 10.244.0.3:53053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033754s
	[INFO] 10.244.0.3:56475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057414s
	[INFO] 10.244.1.2:60401 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103334s
	[INFO] 10.244.1.2:43267 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069481s
	[INFO] 10.244.1.2:46759 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066157s
	[INFO] 10.244.1.2:37235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063279s
	[INFO] 10.244.0.3:36517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095833s
	[INFO] 10.244.0.3:59788 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094153s
	[INFO] 10.244.0.3:47975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086171s
	[INFO] 10.244.0.3:33465 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066933s
	[INFO] 10.244.1.2:59323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011084s
	[INFO] 10.244.1.2:40674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164319s
	[INFO] 10.244.1.2:56217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073787s
	[INFO] 10.244.1.2:44710 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063369s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56998 - 52418 "HINFO IN 360002067607903876.7109424447820596251. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020937782s
	
	
	==> describe nodes <==
	Name:               multinode-220043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-220043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=multinode-220043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_53_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:53:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-220043
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:54:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    multinode-220043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc97b33a023c4b5f9cb1c356ee5766ba
	  System UUID:                bc97b33a-023c-4b5f-9cb1-c356ee5766ba
	  Boot ID:                    c6913746-254d-474c-a7f6-c153c0501375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6q6qp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-7db6d8ff4d-nl9gd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 etcd-multinode-220043                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-dnshn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m14s
	  kube-system                 kube-apiserver-multinode-220043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-controller-manager-multinode-220043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-fk7mt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-multinode-220043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m12s                kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m28s                kubelet          Node multinode-220043 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                kubelet          Node multinode-220043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                kubelet          Node multinode-220043 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m28s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m15s                node-controller  Node multinode-220043 event: Registered Node multinode-220043 in Controller
	  Normal  NodeReady                7m59s                kubelet          Node multinode-220043 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-220043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-220043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-220043 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                  node-controller  Node multinode-220043 event: Registered Node multinode-220043 in Controller
	
	
	Name:               multinode-220043-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-220043-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=multinode-220043
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T21_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:01:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-220043-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:01:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:01:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:01:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:01:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-220043-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 572fe7a56be640cc8f1e1a65d2fae511
	  System UUID:                572fe7a5-6be6-40cc-8f1e-1a65d2fae511
	  Boot ID:                    c96a58f6-6967-4e27-b614-e09a07a31b86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9l78d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-zrb57              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m28s
	  kube-system                 kube-proxy-dk6fj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m23s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m28s (x3 over 7m28s)  kubelet     Node multinode-220043-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x3 over 7m28s)  kubelet     Node multinode-220043-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s (x3 over 7m28s)  kubelet     Node multinode-220043-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-220043-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-220043-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-220043-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-220043-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-220043-m02 status is now: NodeReady
	
	
	Name:               multinode-220043-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-220043-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=multinode-220043
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T21_02_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:02:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-220043-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:02:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:02:22 +0000   Wed, 31 Jul 2024 21:02:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:02:22 +0000   Wed, 31 Jul 2024 21:02:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:02:22 +0000   Wed, 31 Jul 2024 21:02:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:02:22 +0000   Wed, 31 Jul 2024 21:02:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    multinode-220043-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7749fe92e8414dc6adc067f3cb7155f6
	  System UUID:                7749fe92-e841-4dc6-adc0-67f3cb7155f6
	  Boot ID:                    3056cec3-0f8b-4246-8d8b-e8db8e288951
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8m9rx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-proxy-rz5ws    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m29s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m40s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet     Node multinode-220043-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-220043-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet     Node multinode-220043-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet     Node multinode-220043-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-220043-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-220043-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-220043-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060748] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.213008] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.127614] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.303419] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.269470] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.069300] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.688816] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.553270] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.012011] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.085910] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 20:54] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.727618] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +5.151667] kauditd_printk_skb: 59 callbacks suppressed
	[Jul31 20:55] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:00] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.143454] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.205078] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.163344] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.286545] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +1.282766] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +1.821513] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +4.602021] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.664419] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.110837] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[Jul31 21:01] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820] <==
	{"level":"info","ts":"2024-07-31T21:00:41.224847Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:00:41.223843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T21:00:41.224961Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T21:00:41.223928Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T21:00:41.224039Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.225074Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.225105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.224397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea switched to configuration voters=(10993975698582176490)"}
	{"level":"info","ts":"2024-07-31T21:00:41.227993Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","added-peer-id":"989272a6374482ea","added-peer-peer-urls":["https://192.168.39.184:2380"]}
	{"level":"info","ts":"2024-07-31T21:00:41.22901Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:00:41.232611Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:00:42.644283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.644553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.644585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.64461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.650978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:00:42.650935Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:multinode-220043 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:00:42.652001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:00:42.652228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:00:42.652257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:00:42.65309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:00:42.653706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	
	
	==> etcd [a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e] <==
	{"level":"info","ts":"2024-07-31T20:54:57.381137Z","caller":"traceutil/trace.go:171","msg":"trace[2027806418] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"126.83811ms","start":"2024-07-31T20:54:57.254252Z","end":"2024-07-31T20:54:57.38109Z","steps":["trace[2027806418] 'read index received'  (duration: 39.89µs)","trace[2027806418] 'applied index is now lower than readState.Index'  (duration: 126.796073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:54:57.381434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.178429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-31T20:54:57.382673Z","caller":"traceutil/trace.go:171","msg":"trace[1317591629] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:441; }","duration":"128.442791ms","start":"2024-07-31T20:54:57.254208Z","end":"2024-07-31T20:54:57.382651Z","steps":["trace[1317591629] 'agreement among raft nodes before linearized reading'  (duration: 127.138732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:52.066861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.012314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433511844067669587 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-220043-m03.17e767a78cbb4891\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-220043-m03.17e767a78cbb4891\" value_size:646 lease:210139807212893330 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:55:52.067137Z","caller":"traceutil/trace.go:171","msg":"trace[1896681757] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"222.308725ms","start":"2024-07-31T20:55:51.844813Z","end":"2024-07-31T20:55:52.067121Z","steps":["trace[1896681757] 'process raft request'  (duration: 65.982766ms)","trace[1896681757] 'compare'  (duration: 155.863359ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:55:52.067498Z","caller":"traceutil/trace.go:171","msg":"trace[481020760] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"219.504489ms","start":"2024-07-31T20:55:51.847977Z","end":"2024-07-31T20:55:52.067481Z","steps":["trace[481020760] 'read index received'  (duration: 62.82736ms)","trace[481020760] 'applied index is now lower than readState.Index'  (duration: 156.676336ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:55:52.067724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.736696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:55:52.072499Z","caller":"traceutil/trace.go:171","msg":"trace[1291209307] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"224.520228ms","start":"2024-07-31T20:55:51.84795Z","end":"2024-07-31T20:55:52.07247Z","steps":["trace[1291209307] 'agreement among raft nodes before linearized reading'  (duration: 219.716784ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:52.071292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.847297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-220043-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-31T20:55:52.072704Z","caller":"traceutil/trace.go:171","msg":"trace[1781781647] range","detail":"{range_begin:/registry/minions/multinode-220043-m03; range_end:; response_count:1; response_revision:577; }","duration":"125.287651ms","start":"2024-07-31T20:55:51.947406Z","end":"2024-07-31T20:55:52.072693Z","steps":["trace[1781781647] 'agreement among raft nodes before linearized reading'  (duration: 123.832611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:55:52.073695Z","caller":"traceutil/trace.go:171","msg":"trace[134050814] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"170.240521ms","start":"2024-07-31T20:55:51.897207Z","end":"2024-07-31T20:55:52.067448Z","steps":["trace[134050814] 'process raft request'  (duration: 169.865371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:56.36123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.8194ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433511844067669678 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.184\" mod_revision:564 > success:<request_put:<key:\"/registry/masterleases/192.168.39.184\" value_size:67 lease:210139807212893868 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.184\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:55:56.361398Z","caller":"traceutil/trace.go:171","msg":"trace[877367462] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"188.222659ms","start":"2024-07-31T20:55:56.173163Z","end":"2024-07-31T20:55:56.361385Z","steps":["trace[877367462] 'process raft request'  (duration: 65.189627ms)","trace[877367462] 'compare'  (duration: 122.750255ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:55:56.699187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.591046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-220043-m03\" ","response":"range_response_count:1 size:3228"}
	{"level":"info","ts":"2024-07-31T20:55:56.699461Z","caller":"traceutil/trace.go:171","msg":"trace[1160418171] range","detail":"{range_begin:/registry/minions/multinode-220043-m03; range_end:; response_count:1; response_revision:617; }","duration":"143.884863ms","start":"2024-07-31T20:55:56.555557Z","end":"2024-07-31T20:55:56.699442Z","steps":["trace[1160418171] 'range keys from in-memory index tree'  (duration: 143.42711ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:59:04.779054Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T20:59:04.779121Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-220043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"warn","ts":"2024-07-31T20:59:04.779184Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.779261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.865691Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.865988Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:59:04.866104Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2024-07-31T20:59:04.868559Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T20:59:04.868727Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T20:59:04.868804Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-220043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> kernel <==
	 21:02:25 up 9 min,  0 users,  load average: 0.09, 0.11, 0.05
	Linux multinode-220043 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226] <==
	I0731 20:58:16.234244       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:26.241074       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:26.241179       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:26.241319       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:26.241340       1 main.go:299] handling current node
	I0731 20:58:26.241363       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:26.241379       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:36.241051       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:36.241094       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:36.241245       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:36.241264       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:36.241325       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:36.241345       1 main.go:299] handling current node
	I0731 20:58:46.242099       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:46.242212       1 main.go:299] handling current node
	I0731 20:58:46.242242       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:46.242261       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:46.242394       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:46.242415       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:56.241336       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:56.241442       1 main.go:299] handling current node
	I0731 20:58:56.241471       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:56.241489       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:56.241616       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:56.241660       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d] <==
	I0731 21:01:45.636194       1 main.go:299] handling current node
	I0731 21:01:55.642195       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:01:55.642325       1 main.go:299] handling current node
	I0731 21:01:55.642367       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:01:55.642394       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:01:55.642580       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 21:01:55.642621       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 21:02:05.635661       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:02:05.635695       1 main.go:299] handling current node
	I0731 21:02:05.635712       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:02:05.635717       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:02:05.635883       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 21:02:05.635906       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.2.0/24] 
	I0731 21:02:15.635818       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:02:15.635847       1 main.go:299] handling current node
	I0731 21:02:15.635861       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:02:15.635866       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:02:15.636050       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 21:02:15.636069       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.2.0/24] 
	I0731 21:02:25.636825       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:02:25.636885       1 main.go:299] handling current node
	I0731 21:02:25.636899       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:02:25.636904       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:02:25.637013       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 21:02:25.637017       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109] <==
	I0731 20:53:56.148809       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 20:53:56.156781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184]
	I0731 20:53:56.158101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:53:56.163365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 20:53:56.278571       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:53:57.189884       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:53:57.221795       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 20:53:57.238106       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:54:11.106648       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 20:54:11.177377       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 20:55:22.735083       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44402: use of closed network connection
	E0731 20:55:22.907921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44418: use of closed network connection
	E0731 20:55:23.093550       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44434: use of closed network connection
	E0731 20:55:23.265728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44462: use of closed network connection
	E0731 20:55:23.441663       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44482: use of closed network connection
	E0731 20:55:23.608544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44504: use of closed network connection
	E0731 20:55:23.915079       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44524: use of closed network connection
	E0731 20:55:24.087506       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44538: use of closed network connection
	E0731 20:55:24.258701       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44552: use of closed network connection
	E0731 20:55:24.428387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44564: use of closed network connection
	I0731 20:59:04.777171       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 20:59:04.782275       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:59:04.791281       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:59:04.791717       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0731 20:59:04.809233       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1] <==
	I0731 21:00:43.829721       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0731 21:00:43.930045       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:00:43.937708       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:00:43.941950       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:00:43.942418       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:00:43.942463       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:00:43.942481       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:00:43.942487       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:00:43.944205       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:00:43.944326       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:00:43.945330       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:00:43.950730       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:00:43.950840       1 policy_source.go:224] refreshing policies
	I0731 21:00:43.951097       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 21:00:43.951134       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 21:00:43.951715       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 21:00:43.965920       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:00:44.833253       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 21:00:46.078580       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:00:46.203313       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:00:46.219296       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:00:46.299488       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:00:46.306519       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 21:00:57.083786       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:00:57.119792       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b] <==
	I0731 20:54:57.431528       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m02\" does not exist"
	I0731 20:54:57.444344       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:55:00.208503       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-220043-m02"
	I0731 20:55:17.027091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:55:19.299575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.431885ms"
	I0731 20:55:19.329708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.058576ms"
	I0731 20:55:19.344677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.553183ms"
	I0731 20:55:19.344911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.836µs"
	I0731 20:55:21.412267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.757545ms"
	I0731 20:55:21.413447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.123µs"
	I0731 20:55:22.280956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.823307ms"
	I0731 20:55:22.281261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.243µs"
	I0731 20:55:52.069560       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 20:55:52.070007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:55:52.124451       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:55:55.226898       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-220043-m03"
	I0731 20:56:10.440425       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m03"
	I0731 20:56:39.246026       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:56:40.239820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:56:40.240908       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 20:56:40.260496       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.3.0/24"]
	I0731 20:56:59.084139       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:57:45.273453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m03"
	I0731 20:57:45.331603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.085404ms"
	I0731 20:57:45.331837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.34µs"
	
	
	==> kube-controller-manager [bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab] <==
	I0731 21:00:57.773300       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:01:20.541592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.250559ms"
	I0731 21:01:20.541727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.856µs"
	I0731 21:01:20.558481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.523646ms"
	I0731 21:01:20.574811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.261321ms"
	I0731 21:01:20.575044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.569µs"
	I0731 21:01:24.986675       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m02\" does not exist"
	I0731 21:01:25.006659       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m02" podCIDRs=["10.244.1.0/24"]
	I0731 21:01:26.882820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.378µs"
	I0731 21:01:26.907176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.011µs"
	I0731 21:01:26.916560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.601µs"
	I0731 21:01:26.929584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.473µs"
	I0731 21:01:26.947393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.297µs"
	I0731 21:01:26.954326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.589µs"
	I0731 21:01:26.957293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.634µs"
	I0731 21:01:44.081920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:01:44.105553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.923µs"
	I0731 21:01:44.119106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.451µs"
	I0731 21:01:48.145289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.612178ms"
	I0731 21:01:48.145371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.707µs"
	I0731 21:02:02.421442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:02:03.694904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:02:03.695868       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 21:02:03.716097       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.2.0/24"]
	I0731 21:02:22.286253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	
	
	==> kube-proxy [3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250] <==
	I0731 20:54:12.366294       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:54:12.387611       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	I0731 20:54:12.433493       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:54:12.433539       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:54:12.433556       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:54:12.439495       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:54:12.439913       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:54:12.439961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:54:12.441865       1 config.go:192] "Starting service config controller"
	I0731 20:54:12.441971       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:54:12.442020       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:54:12.442038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:54:12.443664       1 config.go:319] "Starting node config controller"
	I0731 20:54:12.443792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:54:12.542819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:54:12.542974       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:54:12.544797       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5] <==
	I0731 21:00:44.782060       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:00:44.795358       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	I0731 21:00:44.885938       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:00:44.886000       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:00:44.886023       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:00:44.891927       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:00:44.892109       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:00:44.892139       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:00:44.896305       1 config.go:319] "Starting node config controller"
	I0731 21:00:44.897268       1 config.go:192] "Starting service config controller"
	I0731 21:00:44.897353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:00:44.897459       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:00:44.897480       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:00:44.899863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:00:44.998327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:00:44.998437       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:00:44.999988       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815] <==
	E0731 20:53:54.335728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:53:55.186081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:53:55.186126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:53:55.223467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:53:55.223544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:53:55.415933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:53:55.416035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:53:55.543049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.543212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.560413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:53:55.560648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:53:55.593038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:53:55.593084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:53:55.602731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.602821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.635885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:53:55.635959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:53:55.647796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:53:55.647922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:53:55.662202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.662286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.799658       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:53:55.799819       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 20:53:57.618995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 20:59:04.779654       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea] <==
	I0731 21:00:41.790400       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:00:43.868265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:00:43.868307       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:00:43.868317       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:00:43.868324       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:00:43.911223       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:00:43.911258       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:00:43.912705       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:00:43.916924       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:00:43.916958       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:00:43.916977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:00:44.017704       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:00:41 multinode-220043 kubelet[3082]: E0731 21:00:41.145444    3082 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.184:8443: connect: connection refused
	Jul 31 21:00:41 multinode-220043 kubelet[3082]: I0731 21:00:41.629579    3082 kubelet_node_status.go:73] "Attempting to register node" node="multinode-220043"
	Jul 31 21:00:43 multinode-220043 kubelet[3082]: I0731 21:00:43.972911    3082 kubelet_node_status.go:112] "Node was previously registered" node="multinode-220043"
	Jul 31 21:00:43 multinode-220043 kubelet[3082]: I0731 21:00:43.973009    3082 kubelet_node_status.go:76] "Successfully registered node" node="multinode-220043"
	Jul 31 21:00:43 multinode-220043 kubelet[3082]: I0731 21:00:43.974493    3082 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 21:00:43 multinode-220043 kubelet[3082]: I0731 21:00:43.975994    3082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.083800    3082 apiserver.go:52] "Watching apiserver"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.086302    3082 topology_manager.go:215] "Topology Admit Handler" podUID="d4a24288-5134-4044-9ca6-a310ea329b72" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nl9gd"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.086419    3082 topology_manager.go:215] "Topology Admit Handler" podUID="096976bc-3005-4c8d-88a7-da32abefc439" podNamespace="kube-system" podName="kindnet-dnshn"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.086477    3082 topology_manager.go:215] "Topology Admit Handler" podUID="74dfe9b6-475f-4ff0-899a-24eaccd3540f" podNamespace="kube-system" podName="kube-proxy-fk7mt"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.086590    3082 topology_manager.go:215] "Topology Admit Handler" podUID="1cf5142c-160e-4228-81ec-e47c2de8701d" podNamespace="kube-system" podName="storage-provisioner"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.086652    3082 topology_manager.go:215] "Topology Admit Handler" podUID="d932eb77-1509-4fc7-a3ab-7315556707b0" podNamespace="default" podName="busybox-fc5497c4f-6q6qp"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.102032    3082 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.162791    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/096976bc-3005-4c8d-88a7-da32abefc439-xtables-lock\") pod \"kindnet-dnshn\" (UID: \"096976bc-3005-4c8d-88a7-da32abefc439\") " pod="kube-system/kindnet-dnshn"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.162964    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1cf5142c-160e-4228-81ec-e47c2de8701d-tmp\") pod \"storage-provisioner\" (UID: \"1cf5142c-160e-4228-81ec-e47c2de8701d\") " pod="kube-system/storage-provisioner"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163000    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74dfe9b6-475f-4ff0-899a-24eaccd3540f-lib-modules\") pod \"kube-proxy-fk7mt\" (UID: \"74dfe9b6-475f-4ff0-899a-24eaccd3540f\") " pod="kube-system/kube-proxy-fk7mt"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163190    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/096976bc-3005-4c8d-88a7-da32abefc439-lib-modules\") pod \"kindnet-dnshn\" (UID: \"096976bc-3005-4c8d-88a7-da32abefc439\") " pod="kube-system/kindnet-dnshn"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163215    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74dfe9b6-475f-4ff0-899a-24eaccd3540f-xtables-lock\") pod \"kube-proxy-fk7mt\" (UID: \"74dfe9b6-475f-4ff0-899a-24eaccd3540f\") " pod="kube-system/kube-proxy-fk7mt"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163230    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/096976bc-3005-4c8d-88a7-da32abefc439-cni-cfg\") pod \"kindnet-dnshn\" (UID: \"096976bc-3005-4c8d-88a7-da32abefc439\") " pod="kube-system/kindnet-dnshn"
	Jul 31 21:00:49 multinode-220043 kubelet[3082]: I0731 21:00:49.526461    3082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 21:01:40 multinode-220043 kubelet[3082]: E0731 21:01:40.186675    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:02:24.710026 1131141 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19360-1093692/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-220043 -n multinode-220043
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-220043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.72s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 stop
E0731 21:02:34.403232 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-220043 stop: exit status 82 (2m0.482997662s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-220043-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-220043 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status
E0731 21:04:31.357599 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-220043 status: exit status 3 (18.783951082s)

                                                
                                                
-- stdout --
	multinode-220043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-220043-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:04:48.064546 1131807 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	E0731 21:04:48.064589 1131807 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-220043 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-220043 -n multinode-220043
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-220043 logs -n 25: (1.382326242s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043:/home/docker/cp-test_multinode-220043-m02_multinode-220043.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043 sudo cat                                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m02_multinode-220043.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03:/home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043-m03 sudo cat                                   | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp testdata/cp-test.txt                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043:/home/docker/cp-test_multinode-220043-m03_multinode-220043.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043 sudo cat                                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02:/home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043-m02 sudo cat                                   | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-220043 node stop m03                                                          | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	| node    | multinode-220043 node start                                                             | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| stop    | -p multinode-220043                                                                     | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| start   | -p multinode-220043                                                                     | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 20:59 UTC | 31 Jul 24 21:02 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC |                     |
	| node    | multinode-220043 node delete                                                            | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC | 31 Jul 24 21:02 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-220043 stop                                                                   | multinode-220043 | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:59:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:59:03.605333 1130033 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:59:03.605452 1130033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:59:03.605460 1130033 out.go:304] Setting ErrFile to fd 2...
	I0731 20:59:03.605464 1130033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:59:03.605646 1130033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:59:03.606197 1130033 out.go:298] Setting JSON to false
	I0731 20:59:03.607256 1130033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":16895,"bootTime":1722442649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:59:03.607322 1130033 start.go:139] virtualization: kvm guest
	I0731 20:59:03.609371 1130033 out.go:177] * [multinode-220043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:59:03.610691 1130033 notify.go:220] Checking for updates...
	I0731 20:59:03.610701 1130033 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:59:03.612401 1130033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:59:03.614132 1130033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:59:03.615311 1130033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:59:03.616528 1130033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:59:03.617759 1130033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:59:03.619830 1130033 config.go:182] Loaded profile config "multinode-220043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:03.619986 1130033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:59:03.620678 1130033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:59:03.620774 1130033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:03.636949 1130033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0731 20:59:03.637453 1130033 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:03.638082 1130033 main.go:141] libmachine: Using API Version  1
	I0731 20:59:03.638105 1130033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:03.638474 1130033 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:03.638675 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.675565 1130033 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:59:03.676772 1130033 start.go:297] selected driver: kvm2
	I0731 20:59:03.676784 1130033 start.go:901] validating driver "kvm2" against &{Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:03.676939 1130033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:59:03.677378 1130033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:59:03.677461 1130033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:59:03.693539 1130033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:59:03.694297 1130033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:59:03.694341 1130033 cni.go:84] Creating CNI manager for ""
	I0731 20:59:03.694349 1130033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:59:03.694405 1130033 start.go:340] cluster config:
	{Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:03.694532 1130033 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:59:03.696632 1130033 out.go:177] * Starting "multinode-220043" primary control-plane node in "multinode-220043" cluster
	I0731 20:59:03.697801 1130033 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:03.697848 1130033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:59:03.697862 1130033 cache.go:56] Caching tarball of preloaded images
	I0731 20:59:03.697977 1130033 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:59:03.697990 1130033 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:59:03.698190 1130033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/config.json ...
	I0731 20:59:03.698461 1130033 start.go:360] acquireMachinesLock for multinode-220043: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:59:03.698519 1130033 start.go:364] duration metric: took 32.098µs to acquireMachinesLock for "multinode-220043"
	I0731 20:59:03.698555 1130033 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:03.698565 1130033 fix.go:54] fixHost starting: 
	I0731 20:59:03.698847 1130033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:59:03.698886 1130033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:03.714120 1130033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0731 20:59:03.714587 1130033 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:03.715039 1130033 main.go:141] libmachine: Using API Version  1
	I0731 20:59:03.715065 1130033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:03.715373 1130033 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:03.715561 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.715759 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetState
	I0731 20:59:03.717313 1130033 fix.go:112] recreateIfNeeded on multinode-220043: state=Running err=<nil>
	W0731 20:59:03.717344 1130033 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:03.719137 1130033 out.go:177] * Updating the running kvm2 "multinode-220043" VM ...
	I0731 20:59:03.720311 1130033 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:03.720331 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:59:03.720561 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.723125 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.723524 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.723563 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.723667 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.723885 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.724048 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.724196 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.724372 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.724601 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.724615 1130033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:03.832414 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-220043
	
	I0731 20:59:03.832448 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:03.832748 1130033 buildroot.go:166] provisioning hostname "multinode-220043"
	I0731 20:59:03.832785 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:03.833023 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.835701 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.836215 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.836261 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.836453 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.836675 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.836813 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.836953 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.837100 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.837311 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.837334 1130033 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-220043 && echo "multinode-220043" | sudo tee /etc/hostname
	I0731 20:59:03.962280 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-220043
	
	I0731 20:59:03.962318 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:03.965130 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.965457 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:03.965490 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:03.965725 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:03.965934 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.966154 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:03.966311 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:03.966495 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:03.966667 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:03.966683 1130033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-220043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-220043/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-220043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:04.077214 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:04.077246 1130033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 20:59:04.077271 1130033 buildroot.go:174] setting up certificates
	I0731 20:59:04.077280 1130033 provision.go:84] configureAuth start
	I0731 20:59:04.077288 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetMachineName
	I0731 20:59:04.077592 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 20:59:04.080282 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.080648 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.080675 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.080853 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.083024 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.083347 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.083379 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.083509 1130033 provision.go:143] copyHostCerts
	I0731 20:59:04.083542 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:59:04.083582 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 20:59:04.083596 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 20:59:04.083664 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 20:59:04.083751 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:59:04.083770 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 20:59:04.083777 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 20:59:04.083800 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 20:59:04.083844 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:59:04.083861 1130033 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 20:59:04.083867 1130033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 20:59:04.083888 1130033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 20:59:04.083945 1130033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.multinode-220043 san=[127.0.0.1 192.168.39.184 localhost minikube multinode-220043]
	I0731 20:59:04.470974 1130033 provision.go:177] copyRemoteCerts
	I0731 20:59:04.471039 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:04.471065 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.473767 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.474112 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.474152 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.474302 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:04.474501 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.474688 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:04.474838 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 20:59:04.560339 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:59:04.560419 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:04.587860 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:59:04.587926 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 20:59:04.611067 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:59:04.611153 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:04.645540 1130033 provision.go:87] duration metric: took 568.246433ms to configureAuth
	I0731 20:59:04.645572 1130033 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:04.645860 1130033 config.go:182] Loaded profile config "multinode-220043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:04.645958 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:59:04.648666 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.649023 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:59:04.649051 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:59:04.649282 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:59:04.649480 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.649677 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:59:04.649807 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:59:04.649973 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:04.650136 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 20:59:04.650153 1130033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:00:35.350542 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:00:35.350589 1130033 machine.go:97] duration metric: took 1m31.630262442s to provisionDockerMachine
	I0731 21:00:35.350607 1130033 start.go:293] postStartSetup for "multinode-220043" (driver="kvm2")
	I0731 21:00:35.350621 1130033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:00:35.350646 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.351032 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:00:35.351067 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.354422 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.354932 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.354963 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.355150 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.355385 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.355577 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.355725 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.438303 1130033 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:00:35.443199 1130033 command_runner.go:130] > NAME=Buildroot
	I0731 21:00:35.443222 1130033 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 21:00:35.443226 1130033 command_runner.go:130] > ID=buildroot
	I0731 21:00:35.443236 1130033 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 21:00:35.443244 1130033 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 21:00:35.443424 1130033 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:00:35.443451 1130033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:00:35.443522 1130033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:00:35.443627 1130033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:00:35.443642 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /etc/ssl/certs/11009762.pem
	I0731 21:00:35.443757 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:00:35.453311 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:00:35.477218 1130033 start.go:296] duration metric: took 126.592093ms for postStartSetup
	I0731 21:00:35.477290 1130033 fix.go:56] duration metric: took 1m31.778724399s for fixHost
	I0731 21:00:35.477322 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.480346 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.480768 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.480803 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.480992 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.481237 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.481428 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.481609 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.481777 1130033 main.go:141] libmachine: Using SSH client type: native
	I0731 21:00:35.481992 1130033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0731 21:00:35.482005 1130033 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:00:35.588955 1130033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459635.575657523
	
	I0731 21:00:35.588977 1130033 fix.go:216] guest clock: 1722459635.575657523
	I0731 21:00:35.588985 1130033 fix.go:229] Guest: 2024-07-31 21:00:35.575657523 +0000 UTC Remote: 2024-07-31 21:00:35.477296456 +0000 UTC m=+91.911357802 (delta=98.361067ms)
	I0731 21:00:35.589006 1130033 fix.go:200] guest clock delta is within tolerance: 98.361067ms
	I0731 21:00:35.589012 1130033 start.go:83] releasing machines lock for "multinode-220043", held for 1m31.89046585s
	I0731 21:00:35.589034 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.589310 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 21:00:35.592082 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.592427 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.592453 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.592636 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593209 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593388 1130033 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 21:00:35.593479 1130033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:00:35.593527 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.593589 1130033 ssh_runner.go:195] Run: cat /version.json
	I0731 21:00:35.593629 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 21:00:35.596393 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596420 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596826 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.596860 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:35.596889 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.596908 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:35.597050 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.597161 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 21:00:35.597261 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.597345 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 21:00:35.597460 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.597529 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 21:00:35.597613 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.597695 1130033 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 21:00:35.676701 1130033 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 21:00:35.695004 1130033 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 21:00:35.695062 1130033 ssh_runner.go:195] Run: systemctl --version
	I0731 21:00:35.700669 1130033 command_runner.go:130] > systemd 252 (252)
	I0731 21:00:35.700721 1130033 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 21:00:35.700898 1130033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:00:35.850847 1130033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 21:00:35.856762 1130033 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 21:00:35.856947 1130033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:00:35.857022 1130033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:00:35.866402 1130033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:00:35.866432 1130033 start.go:495] detecting cgroup driver to use...
	I0731 21:00:35.866511 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:00:35.884299 1130033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:00:35.898903 1130033 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:00:35.898977 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:00:35.913074 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:00:35.927057 1130033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:00:36.078703 1130033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:00:36.227129 1130033 docker.go:233] disabling docker service ...
	I0731 21:00:36.227218 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:00:36.248140 1130033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:00:36.262929 1130033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:00:36.433507 1130033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:00:36.599208 1130033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:00:36.613185 1130033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:00:36.631581 1130033 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 21:00:36.631781 1130033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:00:36.631848 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.642609 1130033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:00:36.642691 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.654016 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.664773 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.675338 1130033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:00:36.686804 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.697317 1130033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.708198 1130033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:00:36.718713 1130033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:00:36.728143 1130033 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 21:00:36.728255 1130033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:00:36.737740 1130033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:36.876157 1130033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:00:37.691313 1130033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:00:37.691388 1130033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:00:37.696344 1130033 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 21:00:37.696372 1130033 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 21:00:37.696379 1130033 command_runner.go:130] > Device: 0,22	Inode: 1359        Links: 1
	I0731 21:00:37.696386 1130033 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 21:00:37.696391 1130033 command_runner.go:130] > Access: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696396 1130033 command_runner.go:130] > Modify: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696400 1130033 command_runner.go:130] > Change: 2024-07-31 21:00:37.565882918 +0000
	I0731 21:00:37.696404 1130033 command_runner.go:130] >  Birth: -
	I0731 21:00:37.696424 1130033 start.go:563] Will wait 60s for crictl version
	I0731 21:00:37.696480 1130033 ssh_runner.go:195] Run: which crictl
	I0731 21:00:37.700183 1130033 command_runner.go:130] > /usr/bin/crictl
	I0731 21:00:37.700265 1130033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:00:37.740221 1130033 command_runner.go:130] > Version:  0.1.0
	I0731 21:00:37.740249 1130033 command_runner.go:130] > RuntimeName:  cri-o
	I0731 21:00:37.740254 1130033 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 21:00:37.740260 1130033 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 21:00:37.741390 1130033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:00:37.741475 1130033 ssh_runner.go:195] Run: crio --version
	I0731 21:00:37.769274 1130033 command_runner.go:130] > crio version 1.29.1
	I0731 21:00:37.769309 1130033 command_runner.go:130] > Version:        1.29.1
	I0731 21:00:37.769320 1130033 command_runner.go:130] > GitCommit:      unknown
	I0731 21:00:37.769325 1130033 command_runner.go:130] > GitCommitDate:  unknown
	I0731 21:00:37.769331 1130033 command_runner.go:130] > GitTreeState:   clean
	I0731 21:00:37.769339 1130033 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 21:00:37.769345 1130033 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 21:00:37.769350 1130033 command_runner.go:130] > Compiler:       gc
	I0731 21:00:37.769365 1130033 command_runner.go:130] > Platform:       linux/amd64
	I0731 21:00:37.769376 1130033 command_runner.go:130] > Linkmode:       dynamic
	I0731 21:00:37.769385 1130033 command_runner.go:130] > BuildTags:      
	I0731 21:00:37.769393 1130033 command_runner.go:130] >   containers_image_ostree_stub
	I0731 21:00:37.769400 1130033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 21:00:37.769407 1130033 command_runner.go:130] >   btrfs_noversion
	I0731 21:00:37.769415 1130033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 21:00:37.769422 1130033 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 21:00:37.769428 1130033 command_runner.go:130] >   seccomp
	I0731 21:00:37.769435 1130033 command_runner.go:130] > LDFlags:          unknown
	I0731 21:00:37.769443 1130033 command_runner.go:130] > SeccompEnabled:   true
	I0731 21:00:37.769450 1130033 command_runner.go:130] > AppArmorEnabled:  false
	I0731 21:00:37.769532 1130033 ssh_runner.go:195] Run: crio --version
	I0731 21:00:37.796723 1130033 command_runner.go:130] > crio version 1.29.1
	I0731 21:00:37.796750 1130033 command_runner.go:130] > Version:        1.29.1
	I0731 21:00:37.796758 1130033 command_runner.go:130] > GitCommit:      unknown
	I0731 21:00:37.796765 1130033 command_runner.go:130] > GitCommitDate:  unknown
	I0731 21:00:37.796771 1130033 command_runner.go:130] > GitTreeState:   clean
	I0731 21:00:37.796780 1130033 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 21:00:37.796785 1130033 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 21:00:37.796790 1130033 command_runner.go:130] > Compiler:       gc
	I0731 21:00:37.796796 1130033 command_runner.go:130] > Platform:       linux/amd64
	I0731 21:00:37.796803 1130033 command_runner.go:130] > Linkmode:       dynamic
	I0731 21:00:37.796823 1130033 command_runner.go:130] > BuildTags:      
	I0731 21:00:37.796833 1130033 command_runner.go:130] >   containers_image_ostree_stub
	I0731 21:00:37.796845 1130033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 21:00:37.796851 1130033 command_runner.go:130] >   btrfs_noversion
	I0731 21:00:37.796869 1130033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 21:00:37.796879 1130033 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 21:00:37.796886 1130033 command_runner.go:130] >   seccomp
	I0731 21:00:37.796893 1130033 command_runner.go:130] > LDFlags:          unknown
	I0731 21:00:37.796901 1130033 command_runner.go:130] > SeccompEnabled:   true
	I0731 21:00:37.796914 1130033 command_runner.go:130] > AppArmorEnabled:  false
	I0731 21:00:37.800869 1130033 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:00:37.802070 1130033 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 21:00:37.805152 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:37.805556 1130033 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 21:00:37.805591 1130033 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 21:00:37.805850 1130033 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:00:37.810072 1130033 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 21:00:37.810222 1130033 kubeadm.go:883] updating cluster {Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:00:37.810374 1130033 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:00:37.810421 1130033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:00:37.855203 1130033 command_runner.go:130] > {
	I0731 21:00:37.855230 1130033 command_runner.go:130] >   "images": [
	I0731 21:00:37.855234 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855242 1130033 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 21:00:37.855247 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855253 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 21:00:37.855261 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855264 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855272 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 21:00:37.855279 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 21:00:37.855282 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855287 1130033 command_runner.go:130] >       "size": "87165492",
	I0731 21:00:37.855291 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855295 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855302 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855306 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855309 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855313 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855319 1130033 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 21:00:37.855326 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855331 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 21:00:37.855337 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855341 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855347 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 21:00:37.855357 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 21:00:37.855364 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855368 1130033 command_runner.go:130] >       "size": "87174707",
	I0731 21:00:37.855371 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855382 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855389 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855394 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855402 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855407 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855419 1130033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 21:00:37.855429 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855436 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 21:00:37.855442 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855446 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855454 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 21:00:37.855463 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 21:00:37.855467 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855472 1130033 command_runner.go:130] >       "size": "1363676",
	I0731 21:00:37.855478 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855482 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855489 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855496 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855501 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855506 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855519 1130033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 21:00:37.855528 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855536 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 21:00:37.855539 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855546 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855553 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 21:00:37.855575 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 21:00:37.855584 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855591 1130033 command_runner.go:130] >       "size": "31470524",
	I0731 21:00:37.855597 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855607 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855616 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855626 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855632 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855636 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855644 1130033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 21:00:37.855651 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855656 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 21:00:37.855696 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855740 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855755 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 21:00:37.855766 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 21:00:37.855772 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855777 1130033 command_runner.go:130] >       "size": "61245718",
	I0731 21:00:37.855782 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.855788 1130033 command_runner.go:130] >       "username": "nonroot",
	I0731 21:00:37.855796 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855805 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855812 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855820 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855830 1130033 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 21:00:37.855839 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.855850 1130033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 21:00:37.855857 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855867 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.855877 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 21:00:37.855891 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 21:00:37.855900 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.855911 1130033 command_runner.go:130] >       "size": "150779692",
	I0731 21:00:37.855920 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.855930 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.855938 1130033 command_runner.go:130] >       },
	I0731 21:00:37.855944 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.855953 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.855958 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.855964 1130033 command_runner.go:130] >     },
	I0731 21:00:37.855970 1130033 command_runner.go:130] >     {
	I0731 21:00:37.855984 1130033 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 21:00:37.855999 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856010 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 21:00:37.856021 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856031 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856044 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 21:00:37.856055 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 21:00:37.856064 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856075 1130033 command_runner.go:130] >       "size": "117609954",
	I0731 21:00:37.856084 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856109 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856115 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856125 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856132 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856142 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856148 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856157 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856169 1130033 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 21:00:37.856175 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856184 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 21:00:37.856194 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856200 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856225 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 21:00:37.856241 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 21:00:37.856250 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856259 1130033 command_runner.go:130] >       "size": "112198984",
	I0731 21:00:37.856268 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856278 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856285 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856291 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856296 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856301 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856304 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856309 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856318 1130033 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 21:00:37.856324 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856332 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 21:00:37.856339 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856346 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856358 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 21:00:37.856369 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 21:00:37.856375 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856381 1130033 command_runner.go:130] >       "size": "85953945",
	I0731 21:00:37.856387 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.856391 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856398 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856413 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856422 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856430 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856441 1130033 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 21:00:37.856451 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856466 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 21:00:37.856474 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856482 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856496 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 21:00:37.856511 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 21:00:37.856520 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856527 1130033 command_runner.go:130] >       "size": "63051080",
	I0731 21:00:37.856536 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856556 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.856564 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856572 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856579 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856588 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.856598 1130033 command_runner.go:130] >     },
	I0731 21:00:37.856606 1130033 command_runner.go:130] >     {
	I0731 21:00:37.856615 1130033 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 21:00:37.856625 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.856636 1130033 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 21:00:37.856644 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856653 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.856663 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 21:00:37.856678 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 21:00:37.856690 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.856700 1130033 command_runner.go:130] >       "size": "750414",
	I0731 21:00:37.856709 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.856719 1130033 command_runner.go:130] >         "value": "65535"
	I0731 21:00:37.856728 1130033 command_runner.go:130] >       },
	I0731 21:00:37.856737 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.856745 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.856754 1130033 command_runner.go:130] >       "pinned": true
	I0731 21:00:37.856760 1130033 command_runner.go:130] >     }
	I0731 21:00:37.856766 1130033 command_runner.go:130] >   ]
	I0731 21:00:37.856770 1130033 command_runner.go:130] > }
	I0731 21:00:37.857056 1130033 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:00:37.857074 1130033 crio.go:433] Images already preloaded, skipping extraction
	I0731 21:00:37.857139 1130033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:00:37.891943 1130033 command_runner.go:130] > {
	I0731 21:00:37.891974 1130033 command_runner.go:130] >   "images": [
	I0731 21:00:37.891980 1130033 command_runner.go:130] >     {
	I0731 21:00:37.891988 1130033 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 21:00:37.891993 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.891999 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 21:00:37.892004 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892008 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892017 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 21:00:37.892027 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 21:00:37.892032 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892043 1130033 command_runner.go:130] >       "size": "87165492",
	I0731 21:00:37.892051 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892059 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892070 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892078 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892082 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892097 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892110 1130033 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 21:00:37.892117 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892129 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 21:00:37.892138 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892145 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892157 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 21:00:37.892166 1130033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 21:00:37.892172 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892183 1130033 command_runner.go:130] >       "size": "87174707",
	I0731 21:00:37.892196 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892213 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892222 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892231 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892239 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892246 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892256 1130033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 21:00:37.892265 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892277 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 21:00:37.892287 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892297 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892309 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 21:00:37.892323 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 21:00:37.892332 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892339 1130033 command_runner.go:130] >       "size": "1363676",
	I0731 21:00:37.892344 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892352 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892375 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892386 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892390 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892396 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892408 1130033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 21:00:37.892418 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892424 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 21:00:37.892430 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892437 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892452 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 21:00:37.892472 1130033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 21:00:37.892481 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892492 1130033 command_runner.go:130] >       "size": "31470524",
	I0731 21:00:37.892502 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892510 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892514 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892523 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892531 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892540 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892552 1130033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 21:00:37.892562 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892573 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 21:00:37.892592 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892600 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892620 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 21:00:37.892636 1130033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 21:00:37.892644 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892654 1130033 command_runner.go:130] >       "size": "61245718",
	I0731 21:00:37.892663 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.892672 1130033 command_runner.go:130] >       "username": "nonroot",
	I0731 21:00:37.892680 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892684 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892688 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892696 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892709 1130033 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 21:00:37.892719 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892729 1130033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 21:00:37.892738 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892747 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892761 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 21:00:37.892772 1130033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 21:00:37.892780 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892787 1130033 command_runner.go:130] >       "size": "150779692",
	I0731 21:00:37.892797 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.892806 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.892818 1130033 command_runner.go:130] >       },
	I0731 21:00:37.892828 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892837 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892846 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.892853 1130033 command_runner.go:130] >     },
	I0731 21:00:37.892857 1130033 command_runner.go:130] >     {
	I0731 21:00:37.892869 1130033 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 21:00:37.892879 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.892889 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 21:00:37.892898 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892908 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.892923 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 21:00:37.892938 1130033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 21:00:37.892945 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.892950 1130033 command_runner.go:130] >       "size": "117609954",
	I0731 21:00:37.892955 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.892963 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.892969 1130033 command_runner.go:130] >       },
	I0731 21:00:37.892977 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.892984 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.892993 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893001 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893007 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893019 1130033 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 21:00:37.893028 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893033 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 21:00:37.893041 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893048 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893074 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 21:00:37.893092 1130033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 21:00:37.893097 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893104 1130033 command_runner.go:130] >       "size": "112198984",
	I0731 21:00:37.893110 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893115 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.893118 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893122 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893127 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893133 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893139 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893150 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893164 1130033 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 21:00:37.893170 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893181 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 21:00:37.893189 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893196 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893207 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 21:00:37.893224 1130033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 21:00:37.893232 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893241 1130033 command_runner.go:130] >       "size": "85953945",
	I0731 21:00:37.893250 1130033 command_runner.go:130] >       "uid": null,
	I0731 21:00:37.893260 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893269 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893279 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893287 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893296 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893304 1130033 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 21:00:37.893310 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893319 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 21:00:37.893328 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893337 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893352 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 21:00:37.893371 1130033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 21:00:37.893380 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893387 1130033 command_runner.go:130] >       "size": "63051080",
	I0731 21:00:37.893391 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893400 1130033 command_runner.go:130] >         "value": "0"
	I0731 21:00:37.893406 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893416 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893426 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893435 1130033 command_runner.go:130] >       "pinned": false
	I0731 21:00:37.893443 1130033 command_runner.go:130] >     },
	I0731 21:00:37.893449 1130033 command_runner.go:130] >     {
	I0731 21:00:37.893459 1130033 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 21:00:37.893468 1130033 command_runner.go:130] >       "repoTags": [
	I0731 21:00:37.893473 1130033 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 21:00:37.893481 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893487 1130033 command_runner.go:130] >       "repoDigests": [
	I0731 21:00:37.893501 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 21:00:37.893521 1130033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 21:00:37.893526 1130033 command_runner.go:130] >       ],
	I0731 21:00:37.893532 1130033 command_runner.go:130] >       "size": "750414",
	I0731 21:00:37.893541 1130033 command_runner.go:130] >       "uid": {
	I0731 21:00:37.893550 1130033 command_runner.go:130] >         "value": "65535"
	I0731 21:00:37.893561 1130033 command_runner.go:130] >       },
	I0731 21:00:37.893568 1130033 command_runner.go:130] >       "username": "",
	I0731 21:00:37.893577 1130033 command_runner.go:130] >       "spec": null,
	I0731 21:00:37.893584 1130033 command_runner.go:130] >       "pinned": true
	I0731 21:00:37.893593 1130033 command_runner.go:130] >     }
	I0731 21:00:37.893598 1130033 command_runner.go:130] >   ]
	I0731 21:00:37.893606 1130033 command_runner.go:130] > }
	I0731 21:00:37.893786 1130033 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:00:37.893804 1130033 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:00:37.893814 1130033 kubeadm.go:934] updating node { 192.168.39.184 8443 v1.30.3 crio true true} ...
	I0731 21:00:37.893988 1130033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-220043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:00:37.894079 1130033 ssh_runner.go:195] Run: crio config
	I0731 21:00:37.927262 1130033 command_runner.go:130] ! time="2024-07-31 21:00:37.914158409Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 21:00:37.932848 1130033 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 21:00:37.939049 1130033 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 21:00:37.939088 1130033 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 21:00:37.939098 1130033 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 21:00:37.939102 1130033 command_runner.go:130] > #
	I0731 21:00:37.939109 1130033 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 21:00:37.939115 1130033 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 21:00:37.939121 1130033 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 21:00:37.939127 1130033 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 21:00:37.939131 1130033 command_runner.go:130] > # reload'.
	I0731 21:00:37.939137 1130033 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 21:00:37.939143 1130033 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 21:00:37.939151 1130033 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 21:00:37.939164 1130033 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 21:00:37.939178 1130033 command_runner.go:130] > [crio]
	I0731 21:00:37.939190 1130033 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 21:00:37.939199 1130033 command_runner.go:130] > # containers images, in this directory.
	I0731 21:00:37.939203 1130033 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 21:00:37.939212 1130033 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 21:00:37.939217 1130033 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 21:00:37.939224 1130033 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 21:00:37.939230 1130033 command_runner.go:130] > # imagestore = ""
	I0731 21:00:37.939237 1130033 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 21:00:37.939249 1130033 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 21:00:37.939261 1130033 command_runner.go:130] > storage_driver = "overlay"
	I0731 21:00:37.939273 1130033 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 21:00:37.939284 1130033 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 21:00:37.939304 1130033 command_runner.go:130] > storage_option = [
	I0731 21:00:37.939312 1130033 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 21:00:37.939315 1130033 command_runner.go:130] > ]
	I0731 21:00:37.939321 1130033 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 21:00:37.939330 1130033 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 21:00:37.939341 1130033 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 21:00:37.939350 1130033 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 21:00:37.939363 1130033 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 21:00:37.939373 1130033 command_runner.go:130] > # always happen on a node reboot
	I0731 21:00:37.939384 1130033 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 21:00:37.939400 1130033 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 21:00:37.939409 1130033 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 21:00:37.939419 1130033 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 21:00:37.939431 1130033 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 21:00:37.939445 1130033 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 21:00:37.939461 1130033 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 21:00:37.939470 1130033 command_runner.go:130] > # internal_wipe = true
	I0731 21:00:37.939481 1130033 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 21:00:37.939490 1130033 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 21:00:37.939496 1130033 command_runner.go:130] > # internal_repair = false
	I0731 21:00:37.939520 1130033 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 21:00:37.939532 1130033 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 21:00:37.939544 1130033 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 21:00:37.939557 1130033 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 21:00:37.939569 1130033 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 21:00:37.939575 1130033 command_runner.go:130] > [crio.api]
	I0731 21:00:37.939582 1130033 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 21:00:37.939592 1130033 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 21:00:37.939607 1130033 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 21:00:37.939616 1130033 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 21:00:37.939630 1130033 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 21:00:37.939641 1130033 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 21:00:37.939650 1130033 command_runner.go:130] > # stream_port = "0"
	I0731 21:00:37.939659 1130033 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 21:00:37.939666 1130033 command_runner.go:130] > # stream_enable_tls = false
	I0731 21:00:37.939675 1130033 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 21:00:37.939685 1130033 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 21:00:37.939699 1130033 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 21:00:37.939712 1130033 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 21:00:37.939721 1130033 command_runner.go:130] > # minutes.
	I0731 21:00:37.939730 1130033 command_runner.go:130] > # stream_tls_cert = ""
	I0731 21:00:37.939741 1130033 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 21:00:37.939750 1130033 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 21:00:37.939760 1130033 command_runner.go:130] > # stream_tls_key = ""
	I0731 21:00:37.939776 1130033 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 21:00:37.939789 1130033 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 21:00:37.939813 1130033 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 21:00:37.939822 1130033 command_runner.go:130] > # stream_tls_ca = ""
	I0731 21:00:37.939833 1130033 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 21:00:37.939841 1130033 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 21:00:37.939855 1130033 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 21:00:37.939867 1130033 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 21:00:37.939882 1130033 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 21:00:37.939895 1130033 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 21:00:37.939904 1130033 command_runner.go:130] > [crio.runtime]
	I0731 21:00:37.939916 1130033 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 21:00:37.939925 1130033 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 21:00:37.939935 1130033 command_runner.go:130] > # "nofile=1024:2048"
	I0731 21:00:37.939948 1130033 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 21:00:37.939959 1130033 command_runner.go:130] > # default_ulimits = [
	I0731 21:00:37.939967 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.939980 1130033 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 21:00:37.939990 1130033 command_runner.go:130] > # no_pivot = false
	I0731 21:00:37.940001 1130033 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 21:00:37.940010 1130033 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 21:00:37.940018 1130033 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 21:00:37.940031 1130033 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 21:00:37.940042 1130033 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 21:00:37.940055 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 21:00:37.940065 1130033 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 21:00:37.940075 1130033 command_runner.go:130] > # Cgroup setting for conmon
	I0731 21:00:37.940087 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 21:00:37.940108 1130033 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 21:00:37.940118 1130033 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 21:00:37.940130 1130033 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 21:00:37.940148 1130033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 21:00:37.940157 1130033 command_runner.go:130] > conmon_env = [
	I0731 21:00:37.940167 1130033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 21:00:37.940172 1130033 command_runner.go:130] > ]
	I0731 21:00:37.940181 1130033 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 21:00:37.940192 1130033 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 21:00:37.940204 1130033 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 21:00:37.940213 1130033 command_runner.go:130] > # default_env = [
	I0731 21:00:37.940222 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940234 1130033 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 21:00:37.940249 1130033 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0731 21:00:37.940255 1130033 command_runner.go:130] > # selinux = false
	I0731 21:00:37.940263 1130033 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 21:00:37.940277 1130033 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 21:00:37.940289 1130033 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 21:00:37.940299 1130033 command_runner.go:130] > # seccomp_profile = ""
	I0731 21:00:37.940310 1130033 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 21:00:37.940322 1130033 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 21:00:37.940335 1130033 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 21:00:37.940342 1130033 command_runner.go:130] > # which might increase security.
	I0731 21:00:37.940350 1130033 command_runner.go:130] > # This option is currently deprecated,
	I0731 21:00:37.940362 1130033 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 21:00:37.940373 1130033 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 21:00:37.940384 1130033 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 21:00:37.940396 1130033 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 21:00:37.940409 1130033 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 21:00:37.940421 1130033 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 21:00:37.940428 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.940434 1130033 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 21:00:37.940446 1130033 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 21:00:37.940457 1130033 command_runner.go:130] > # the cgroup blockio controller.
	I0731 21:00:37.940463 1130033 command_runner.go:130] > # blockio_config_file = ""
	I0731 21:00:37.940476 1130033 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 21:00:37.940483 1130033 command_runner.go:130] > # blockio parameters.
	I0731 21:00:37.940490 1130033 command_runner.go:130] > # blockio_reload = false
	I0731 21:00:37.940507 1130033 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 21:00:37.940514 1130033 command_runner.go:130] > # irqbalance daemon.
	I0731 21:00:37.940521 1130033 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 21:00:37.940538 1130033 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 21:00:37.940551 1130033 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 21:00:37.940564 1130033 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 21:00:37.940577 1130033 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 21:00:37.940590 1130033 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 21:00:37.940598 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.940602 1130033 command_runner.go:130] > # rdt_config_file = ""
	I0731 21:00:37.940613 1130033 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 21:00:37.940623 1130033 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 21:00:37.940654 1130033 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 21:00:37.940664 1130033 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 21:00:37.940674 1130033 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 21:00:37.940681 1130033 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 21:00:37.940687 1130033 command_runner.go:130] > # will be added.
	I0731 21:00:37.940693 1130033 command_runner.go:130] > # default_capabilities = [
	I0731 21:00:37.940702 1130033 command_runner.go:130] > # 	"CHOWN",
	I0731 21:00:37.940712 1130033 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 21:00:37.940722 1130033 command_runner.go:130] > # 	"FSETID",
	I0731 21:00:37.940732 1130033 command_runner.go:130] > # 	"FOWNER",
	I0731 21:00:37.940741 1130033 command_runner.go:130] > # 	"SETGID",
	I0731 21:00:37.940750 1130033 command_runner.go:130] > # 	"SETUID",
	I0731 21:00:37.940756 1130033 command_runner.go:130] > # 	"SETPCAP",
	I0731 21:00:37.940764 1130033 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 21:00:37.940767 1130033 command_runner.go:130] > # 	"KILL",
	I0731 21:00:37.940770 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940784 1130033 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 21:00:37.940798 1130033 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 21:00:37.940809 1130033 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 21:00:37.940822 1130033 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 21:00:37.940834 1130033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 21:00:37.940843 1130033 command_runner.go:130] > default_sysctls = [
	I0731 21:00:37.940852 1130033 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 21:00:37.940857 1130033 command_runner.go:130] > ]
	I0731 21:00:37.940864 1130033 command_runner.go:130] > # List of devices on the host that a
	I0731 21:00:37.940876 1130033 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 21:00:37.940886 1130033 command_runner.go:130] > # allowed_devices = [
	I0731 21:00:37.940895 1130033 command_runner.go:130] > # 	"/dev/fuse",
	I0731 21:00:37.940903 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940910 1130033 command_runner.go:130] > # List of additional devices, specified as
	I0731 21:00:37.940926 1130033 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 21:00:37.940936 1130033 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 21:00:37.940950 1130033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 21:00:37.940960 1130033 command_runner.go:130] > # additional_devices = [
	I0731 21:00:37.940968 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.940980 1130033 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 21:00:37.940989 1130033 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 21:00:37.940995 1130033 command_runner.go:130] > # 	"/etc/cdi",
	I0731 21:00:37.941005 1130033 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 21:00:37.941011 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.941021 1130033 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 21:00:37.941030 1130033 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 21:00:37.941038 1130033 command_runner.go:130] > # Defaults to false.
	I0731 21:00:37.941050 1130033 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 21:00:37.941063 1130033 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 21:00:37.941076 1130033 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 21:00:37.941084 1130033 command_runner.go:130] > # hooks_dir = [
	I0731 21:00:37.941095 1130033 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 21:00:37.941102 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.941109 1130033 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 21:00:37.941121 1130033 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 21:00:37.941131 1130033 command_runner.go:130] > # its default mounts from the following two files:
	I0731 21:00:37.941139 1130033 command_runner.go:130] > #
	I0731 21:00:37.941152 1130033 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 21:00:37.941167 1130033 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 21:00:37.941178 1130033 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 21:00:37.941187 1130033 command_runner.go:130] > #
	I0731 21:00:37.941196 1130033 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 21:00:37.941211 1130033 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 21:00:37.941225 1130033 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 21:00:37.941237 1130033 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 21:00:37.941244 1130033 command_runner.go:130] > #
	I0731 21:00:37.941251 1130033 command_runner.go:130] > # default_mounts_file = ""
	I0731 21:00:37.941263 1130033 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 21:00:37.941276 1130033 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 21:00:37.941282 1130033 command_runner.go:130] > pids_limit = 1024
	I0731 21:00:37.941291 1130033 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 21:00:37.941304 1130033 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 21:00:37.941317 1130033 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 21:00:37.941332 1130033 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 21:00:37.941341 1130033 command_runner.go:130] > # log_size_max = -1
	I0731 21:00:37.941355 1130033 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 21:00:37.941366 1130033 command_runner.go:130] > # log_to_journald = false
	I0731 21:00:37.941378 1130033 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 21:00:37.941389 1130033 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 21:00:37.941401 1130033 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 21:00:37.941412 1130033 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 21:00:37.941424 1130033 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 21:00:37.941433 1130033 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 21:00:37.941444 1130033 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 21:00:37.941451 1130033 command_runner.go:130] > # read_only = false
	I0731 21:00:37.941459 1130033 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 21:00:37.941473 1130033 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 21:00:37.941483 1130033 command_runner.go:130] > # live configuration reload.
	I0731 21:00:37.941490 1130033 command_runner.go:130] > # log_level = "info"
	I0731 21:00:37.941505 1130033 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 21:00:37.941516 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.941526 1130033 command_runner.go:130] > # log_filter = ""
	I0731 21:00:37.941534 1130033 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 21:00:37.941546 1130033 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 21:00:37.941556 1130033 command_runner.go:130] > # separated by comma.
	I0731 21:00:37.941571 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941579 1130033 command_runner.go:130] > # uid_mappings = ""
	I0731 21:00:37.941592 1130033 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 21:00:37.941604 1130033 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 21:00:37.941613 1130033 command_runner.go:130] > # separated by comma.
	I0731 21:00:37.941624 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941632 1130033 command_runner.go:130] > # gid_mappings = ""
	I0731 21:00:37.941645 1130033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 21:00:37.941657 1130033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 21:00:37.941669 1130033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 21:00:37.941684 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941694 1130033 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 21:00:37.941705 1130033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 21:00:37.941714 1130033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 21:00:37.941726 1130033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 21:00:37.941742 1130033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 21:00:37.941759 1130033 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 21:00:37.941771 1130033 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 21:00:37.941783 1130033 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 21:00:37.941792 1130033 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 21:00:37.941799 1130033 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 21:00:37.941808 1130033 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 21:00:37.941821 1130033 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 21:00:37.941832 1130033 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 21:00:37.941843 1130033 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 21:00:37.941851 1130033 command_runner.go:130] > drop_infra_ctr = false
	I0731 21:00:37.941863 1130033 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 21:00:37.941874 1130033 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 21:00:37.941883 1130033 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 21:00:37.941892 1130033 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 21:00:37.941907 1130033 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 21:00:37.941919 1130033 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 21:00:37.941931 1130033 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 21:00:37.941942 1130033 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 21:00:37.941951 1130033 command_runner.go:130] > # shared_cpuset = ""
	I0731 21:00:37.941961 1130033 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 21:00:37.941969 1130033 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 21:00:37.941979 1130033 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 21:00:37.941993 1130033 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 21:00:37.942003 1130033 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 21:00:37.942015 1130033 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 21:00:37.942028 1130033 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 21:00:37.942037 1130033 command_runner.go:130] > # enable_criu_support = false
	I0731 21:00:37.942045 1130033 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 21:00:37.942053 1130033 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 21:00:37.942060 1130033 command_runner.go:130] > # enable_pod_events = false
	I0731 21:00:37.942073 1130033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 21:00:37.942097 1130033 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 21:00:37.942106 1130033 command_runner.go:130] > # default_runtime = "runc"
	I0731 21:00:37.942117 1130033 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 21:00:37.942131 1130033 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 21:00:37.942144 1130033 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 21:00:37.942158 1130033 command_runner.go:130] > # creation as a file is not desired either.
	I0731 21:00:37.942174 1130033 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 21:00:37.942184 1130033 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 21:00:37.942194 1130033 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 21:00:37.942202 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.942215 1130033 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 21:00:37.942224 1130033 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 21:00:37.942237 1130033 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 21:00:37.942249 1130033 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 21:00:37.942258 1130033 command_runner.go:130] > #
	I0731 21:00:37.942266 1130033 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 21:00:37.942276 1130033 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 21:00:37.942303 1130033 command_runner.go:130] > # runtime_type = "oci"
	I0731 21:00:37.942311 1130033 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 21:00:37.942321 1130033 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 21:00:37.942332 1130033 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 21:00:37.942343 1130033 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 21:00:37.942352 1130033 command_runner.go:130] > # monitor_env = []
	I0731 21:00:37.942363 1130033 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 21:00:37.942375 1130033 command_runner.go:130] > # allowed_annotations = []
	I0731 21:00:37.942386 1130033 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 21:00:37.942392 1130033 command_runner.go:130] > # Where:
	I0731 21:00:37.942399 1130033 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 21:00:37.942412 1130033 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 21:00:37.942424 1130033 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 21:00:37.942436 1130033 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 21:00:37.942445 1130033 command_runner.go:130] > #   in $PATH.
	I0731 21:00:37.942458 1130033 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 21:00:37.942469 1130033 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 21:00:37.942478 1130033 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 21:00:37.942486 1130033 command_runner.go:130] > #   state.
	I0731 21:00:37.942505 1130033 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 21:00:37.942513 1130033 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 21:00:37.942522 1130033 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 21:00:37.942533 1130033 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 21:00:37.942545 1130033 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 21:00:37.942557 1130033 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 21:00:37.942569 1130033 command_runner.go:130] > #   The currently recognized values are:
	I0731 21:00:37.942582 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 21:00:37.942600 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 21:00:37.942612 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 21:00:37.942624 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 21:00:37.942638 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 21:00:37.942648 1130033 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 21:00:37.942660 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 21:00:37.942674 1130033 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 21:00:37.942688 1130033 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 21:00:37.942701 1130033 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 21:00:37.942712 1130033 command_runner.go:130] > #   deprecated option "conmon".
	I0731 21:00:37.942726 1130033 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 21:00:37.942734 1130033 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 21:00:37.942745 1130033 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 21:00:37.942755 1130033 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 21:00:37.942768 1130033 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 21:00:37.942779 1130033 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 21:00:37.942792 1130033 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 21:00:37.942803 1130033 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 21:00:37.942812 1130033 command_runner.go:130] > #
	I0731 21:00:37.942821 1130033 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 21:00:37.942825 1130033 command_runner.go:130] > #
	I0731 21:00:37.942836 1130033 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 21:00:37.942849 1130033 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 21:00:37.942857 1130033 command_runner.go:130] > #
	I0731 21:00:37.942870 1130033 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 21:00:37.942884 1130033 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 21:00:37.942892 1130033 command_runner.go:130] > #
	I0731 21:00:37.942904 1130033 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 21:00:37.942911 1130033 command_runner.go:130] > # feature.
	I0731 21:00:37.942914 1130033 command_runner.go:130] > #
	I0731 21:00:37.942926 1130033 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 21:00:37.942939 1130033 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 21:00:37.942952 1130033 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 21:00:37.942968 1130033 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 21:00:37.942980 1130033 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 21:00:37.942988 1130033 command_runner.go:130] > #
	I0731 21:00:37.942996 1130033 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 21:00:37.943006 1130033 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 21:00:37.943013 1130033 command_runner.go:130] > #
	I0731 21:00:37.943024 1130033 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 21:00:37.943036 1130033 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 21:00:37.943043 1130033 command_runner.go:130] > #
	I0731 21:00:37.943054 1130033 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 21:00:37.943067 1130033 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 21:00:37.943076 1130033 command_runner.go:130] > # limitation.
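Putting those comments together: using the notifier needs a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction" (the runc handler defined next does not allow it), plus a pod that sets the annotation, runs with a seccomp profile, and never restarts. A minimal sketch, with an illustrative pod name and image that are not part of this run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                      # illustrative name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never        # otherwise the kubelet restarts the container immediately
	  containers:
	  - name: app
	    image: busybox            # illustrative image
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault  # the notifier works by modifying the chosen seccomp profile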
	I0731 21:00:37.943082 1130033 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 21:00:37.943091 1130033 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 21:00:37.943100 1130033 command_runner.go:130] > runtime_type = "oci"
	I0731 21:00:37.943110 1130033 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 21:00:37.943120 1130033 command_runner.go:130] > runtime_config_path = ""
	I0731 21:00:37.943138 1130033 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 21:00:37.943148 1130033 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 21:00:37.943157 1130033 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 21:00:37.943164 1130033 command_runner.go:130] > monitor_env = [
	I0731 21:00:37.943170 1130033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 21:00:37.943177 1130033 command_runner.go:130] > ]
	I0731 21:00:37.943188 1130033 command_runner.go:130] > privileged_without_host_devices = false
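For comparison with the runtime-handler format documented above, an additional handler is just another stanza next to the runc one. A minimal sketch, assuming a crun binary at /usr/bin/crun (hypothetical; this VM only configures runc):

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]

Pods would then select it through a RuntimeClass whose handler field matches the stanza name.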
	I0731 21:00:37.943201 1130033 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 21:00:37.943213 1130033 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 21:00:37.943226 1130033 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 21:00:37.943242 1130033 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 21:00:37.943253 1130033 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 21:00:37.943264 1130033 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 21:00:37.943282 1130033 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 21:00:37.943297 1130033 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 21:00:37.943306 1130033 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 21:00:37.943386 1130033 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 21:00:37.943426 1130033 command_runner.go:130] > # Example:
	I0731 21:00:37.943436 1130033 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 21:00:37.943444 1130033 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 21:00:37.943466 1130033 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 21:00:37.943474 1130033 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 21:00:37.943482 1130033 command_runner.go:130] > # cpuset = 0
	I0731 21:00:37.943487 1130033 command_runner.go:130] > # cpushares = "0-1"
	I0731 21:00:37.943491 1130033 command_runner.go:130] > # Where:
	I0731 21:00:37.943496 1130033 command_runner.go:130] > # The workload name is workload-type.
	I0731 21:00:37.943508 1130033 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 21:00:37.943518 1130033 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 21:00:37.943526 1130033 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 21:00:37.943539 1130033 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 21:00:37.943547 1130033 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
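Concretely, for the commented workload-type example above, the pod-side opt-in would look roughly like this (the container name and share value are made up for illustration):

	metadata:
	  annotations:
	    io.crio/workload: ""                               # activation annotation; key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override for a container named "app"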
	I0731 21:00:37.943558 1130033 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 21:00:37.943572 1130033 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 21:00:37.943588 1130033 command_runner.go:130] > # Default value is set to true
	I0731 21:00:37.943599 1130033 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 21:00:37.943611 1130033 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 21:00:37.943621 1130033 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 21:00:37.943631 1130033 command_runner.go:130] > # Default value is set to 'false'
	I0731 21:00:37.943638 1130033 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 21:00:37.943645 1130033 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 21:00:37.943651 1130033 command_runner.go:130] > #
	I0731 21:00:37.943657 1130033 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 21:00:37.943665 1130033 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 21:00:37.943673 1130033 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 21:00:37.943683 1130033 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 21:00:37.943691 1130033 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 21:00:37.943697 1130033 command_runner.go:130] > [crio.image]
	I0731 21:00:37.943703 1130033 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 21:00:37.943711 1130033 command_runner.go:130] > # default_transport = "docker://"
	I0731 21:00:37.943717 1130033 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 21:00:37.943725 1130033 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 21:00:37.943732 1130033 command_runner.go:130] > # global_auth_file = ""
	I0731 21:00:37.943737 1130033 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 21:00:37.943744 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.943749 1130033 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 21:00:37.943755 1130033 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 21:00:37.943762 1130033 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 21:00:37.943767 1130033 command_runner.go:130] > # This option supports live configuration reload.
	I0731 21:00:37.943777 1130033 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 21:00:37.943787 1130033 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 21:00:37.943800 1130033 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 21:00:37.943810 1130033 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 21:00:37.943818 1130033 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 21:00:37.943823 1130033 command_runner.go:130] > # pause_command = "/pause"
	I0731 21:00:37.943833 1130033 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 21:00:37.943841 1130033 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 21:00:37.943849 1130033 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 21:00:37.943863 1130033 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 21:00:37.943872 1130033 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 21:00:37.943880 1130033 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 21:00:37.943884 1130033 command_runner.go:130] > # pinned_images = [
	I0731 21:00:37.943888 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.943894 1130033 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 21:00:37.943903 1130033 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 21:00:37.943911 1130033 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 21:00:37.943919 1130033 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 21:00:37.943925 1130033 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 21:00:37.943930 1130033 command_runner.go:130] > # signature_policy = ""
	I0731 21:00:37.943935 1130033 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 21:00:37.943943 1130033 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 21:00:37.943950 1130033 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 21:00:37.943958 1130033 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 21:00:37.943964 1130033 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 21:00:37.943971 1130033 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 21:00:37.943977 1130033 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 21:00:37.943985 1130033 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 21:00:37.943992 1130033 command_runner.go:130] > # changing them here.
	I0731 21:00:37.943996 1130033 command_runner.go:130] > # insecure_registries = [
	I0731 21:00:37.944001 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944007 1130033 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 21:00:37.944012 1130033 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 21:00:37.944016 1130033 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 21:00:37.944023 1130033 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 21:00:37.944027 1130033 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 21:00:37.944038 1130033 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 21:00:37.944045 1130033 command_runner.go:130] > # CNI plugins.
	I0731 21:00:37.944048 1130033 command_runner.go:130] > [crio.network]
	I0731 21:00:37.944057 1130033 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 21:00:37.944062 1130033 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 21:00:37.944068 1130033 command_runner.go:130] > # cni_default_network = ""
	I0731 21:00:37.944075 1130033 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 21:00:37.944082 1130033 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 21:00:37.944103 1130033 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 21:00:37.944115 1130033 command_runner.go:130] > # plugin_dirs = [
	I0731 21:00:37.944119 1130033 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 21:00:37.944125 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944131 1130033 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 21:00:37.944136 1130033 command_runner.go:130] > [crio.metrics]
	I0731 21:00:37.944141 1130033 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 21:00:37.944148 1130033 command_runner.go:130] > enable_metrics = true
	I0731 21:00:37.944152 1130033 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 21:00:37.944159 1130033 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 21:00:37.944165 1130033 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 21:00:37.944177 1130033 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 21:00:37.944190 1130033 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 21:00:37.944199 1130033 command_runner.go:130] > # metrics_collectors = [
	I0731 21:00:37.944207 1130033 command_runner.go:130] > # 	"operations",
	I0731 21:00:37.944212 1130033 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 21:00:37.944218 1130033 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 21:00:37.944222 1130033 command_runner.go:130] > # 	"operations_errors",
	I0731 21:00:37.944229 1130033 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 21:00:37.944234 1130033 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 21:00:37.944240 1130033 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 21:00:37.944245 1130033 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 21:00:37.944251 1130033 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 21:00:37.944255 1130033 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 21:00:37.944262 1130033 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 21:00:37.944266 1130033 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 21:00:37.944272 1130033 command_runner.go:130] > # 	"containers_oom_total",
	I0731 21:00:37.944277 1130033 command_runner.go:130] > # 	"containers_oom",
	I0731 21:00:37.944282 1130033 command_runner.go:130] > # 	"processes_defunct",
	I0731 21:00:37.944286 1130033 command_runner.go:130] > # 	"operations_total",
	I0731 21:00:37.944291 1130033 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 21:00:37.944297 1130033 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 21:00:37.944301 1130033 command_runner.go:130] > # 	"operations_errors_total",
	I0731 21:00:37.944308 1130033 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 21:00:37.944314 1130033 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 21:00:37.944320 1130033 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 21:00:37.944325 1130033 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 21:00:37.944335 1130033 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 21:00:37.944341 1130033 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 21:00:37.944347 1130033 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 21:00:37.944353 1130033 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 21:00:37.944356 1130033 command_runner.go:130] > # ]
	I0731 21:00:37.944363 1130033 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 21:00:37.944370 1130033 command_runner.go:130] > # metrics_port = 9090
	I0731 21:00:37.944375 1130033 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 21:00:37.944381 1130033 command_runner.go:130] > # metrics_socket = ""
	I0731 21:00:37.944386 1130033 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 21:00:37.944394 1130033 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 21:00:37.944401 1130033 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 21:00:37.944408 1130033 command_runner.go:130] > # certificate on any modification event.
	I0731 21:00:37.944414 1130033 command_runner.go:130] > # metrics_cert = ""
	I0731 21:00:37.944423 1130033 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 21:00:37.944430 1130033 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 21:00:37.944434 1130033 command_runner.go:130] > # metrics_key = ""
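Since enable_metrics is true and the port defaults to 9090, a quick sanity check from inside the VM could be a curl against the exporter; the metric names below are taken from the collector list above, and the exact set depends on the CRI-O version:

	$ curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|crio_image_pulls' | head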
	I0731 21:00:37.944441 1130033 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 21:00:37.944448 1130033 command_runner.go:130] > [crio.tracing]
	I0731 21:00:37.944454 1130033 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 21:00:37.944460 1130033 command_runner.go:130] > # enable_tracing = false
	I0731 21:00:37.944465 1130033 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 21:00:37.944469 1130033 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 21:00:37.944477 1130033 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 21:00:37.944483 1130033 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 21:00:37.944488 1130033 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 21:00:37.944503 1130033 command_runner.go:130] > [crio.nri]
	I0731 21:00:37.944508 1130033 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 21:00:37.944515 1130033 command_runner.go:130] > # enable_nri = false
	I0731 21:00:37.944519 1130033 command_runner.go:130] > # NRI socket to listen on.
	I0731 21:00:37.944525 1130033 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 21:00:37.944530 1130033 command_runner.go:130] > # NRI plugin directory to use.
	I0731 21:00:37.944537 1130033 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 21:00:37.944543 1130033 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 21:00:37.944550 1130033 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 21:00:37.944556 1130033 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 21:00:37.944562 1130033 command_runner.go:130] > # nri_disable_connections = false
	I0731 21:00:37.944567 1130033 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 21:00:37.944577 1130033 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 21:00:37.944582 1130033 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 21:00:37.944589 1130033 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 21:00:37.944594 1130033 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 21:00:37.944601 1130033 command_runner.go:130] > [crio.stats]
	I0731 21:00:37.944609 1130033 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 21:00:37.944617 1130033 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 21:00:37.944622 1130033 command_runner.go:130] > # stats_collection_period = 0
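If any of these defaults need changing on the node, individual keys can be overridden without rewriting the whole file; a minimal sketch, assuming CRI-O's standard /etc/crio/crio.conf.d drop-in directory (the file name is made up):

	# /etc/crio/crio.conf.d/99-overrides.conf
	[crio.runtime]
	log_level = "debug"    # one of: fatal, panic, error, warn, info, debug, trace
	pids_limit = 2048

Keys flagged above as supporting live configuration reload are picked up on reload; the rest need CRI-O to be restarted.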
	I0731 21:00:37.944751 1130033 cni.go:84] Creating CNI manager for ""
	I0731 21:00:37.944761 1130033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 21:00:37.944770 1130033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:00:37.944793 1130033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-220043 NodeName:multinode-220043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:00:37.944959 1130033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-220043"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:00:37.945039 1130033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:00:37.955083 1130033 command_runner.go:130] > kubeadm
	I0731 21:00:37.955111 1130033 command_runner.go:130] > kubectl
	I0731 21:00:37.955118 1130033 command_runner.go:130] > kubelet
	I0731 21:00:37.955141 1130033 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:00:37.955190 1130033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:00:37.965316 1130033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 21:00:37.981934 1130033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:00:37.999106 1130033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 21:00:38.016396 1130033 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I0731 21:00:38.020492 1130033 command_runner.go:130] > 192.168.39.184	control-plane.minikube.internal
	I0731 21:00:38.020669 1130033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:38.156809 1130033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:38.171786 1130033 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043 for IP: 192.168.39.184
	I0731 21:00:38.171813 1130033 certs.go:194] generating shared ca certs ...
	I0731 21:00:38.171837 1130033 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:38.172045 1130033 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:00:38.172118 1130033 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:00:38.172134 1130033 certs.go:256] generating profile certs ...
	I0731 21:00:38.172244 1130033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/client.key
	I0731 21:00:38.172329 1130033 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key.bba98ef8
	I0731 21:00:38.172370 1130033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key
	I0731 21:00:38.172382 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 21:00:38.172403 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 21:00:38.172421 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 21:00:38.172438 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 21:00:38.172453 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 21:00:38.172472 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 21:00:38.172491 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 21:00:38.172508 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 21:00:38.172594 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:00:38.172642 1130033 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:00:38.172655 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:00:38.172686 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:00:38.172717 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:00:38.172749 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:00:38.172803 1130033 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:00:38.172849 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem -> /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.172870 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.172890 1130033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.173579 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:00:38.197578 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:00:38.221931 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:00:38.248197 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:00:38.276634 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:00:38.303821 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:00:38.329821 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:00:38.353703 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/multinode-220043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:00:38.377229 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:00:38.401179 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:00:38.424074 1130033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:00:38.448770 1130033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:00:38.467964 1130033 ssh_runner.go:195] Run: openssl version
	I0731 21:00:38.473611 1130033 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 21:00:38.473860 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:00:38.486279 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490657 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490907 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.490975 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:00:38.496523 1130033 command_runner.go:130] > 3ec20f2e
	I0731 21:00:38.496885 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:00:38.507528 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:00:38.519695 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524099 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524192 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.524238 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:00:38.529780 1130033 command_runner.go:130] > b5213941
	I0731 21:00:38.529863 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:00:38.540821 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:00:38.553100 1130033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.557992 1130033 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.558035 1130033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.558081 1130033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:00:38.563979 1130033 command_runner.go:130] > 51391683
	I0731 21:00:38.564183 1130033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
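A hedged sketch of the pattern repeated above, for readers unfamiliar with it: each CA bundle is copied into /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is linked to it so OpenSSL-based clients can find it. The file name "example-ca.pem" below is a placeholder, not a file from this run.

    # illustration of the hash-and-link step seen in the log above
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"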
	I0731 21:00:38.575782 1130033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:00:38.580437 1130033 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:00:38.580466 1130033 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 21:00:38.580473 1130033 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0731 21:00:38.580479 1130033 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 21:00:38.580485 1130033 command_runner.go:130] > Access: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580503 1130033 command_runner.go:130] > Modify: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580509 1130033 command_runner.go:130] > Change: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580514 1130033 command_runner.go:130] >  Birth: 2024-07-31 20:53:47.323432688 +0000
	I0731 21:00:38.580583 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:00:38.586563 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.586651 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:00:38.592638 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.592744 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:00:38.598372 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.598574 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:00:38.604259 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.604341 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:00:38.609641 1130033 command_runner.go:130] > Certificate will not expire
	I0731 21:00:38.609862 1130033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:00:38.615675 1130033 command_runner.go:130] > Certificate will not expire
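For context (an illustrative note, not test output): each expiry check above uses `openssl x509 -checkend 86400`, which prints "Certificate will not expire" and exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise. A standalone form of the same check, using one of the paths from the log:

    # exits 0 if the cert is valid for at least another 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert valid for at least another 24h"
    fi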
	I0731 21:00:38.615784 1130033 kubeadm.go:392] StartCluster: {Name:multinode-220043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-220043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.66 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:00:38.615899 1130033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:00:38.615950 1130033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:00:38.651337 1130033 command_runner.go:130] > 5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff
	I0731 21:00:38.651363 1130033 command_runner.go:130] > 84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d
	I0731 21:00:38.651369 1130033 command_runner.go:130] > 006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226
	I0731 21:00:38.651378 1130033 command_runner.go:130] > 3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250
	I0731 21:00:38.651383 1130033 command_runner.go:130] > 4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815
	I0731 21:00:38.651389 1130033 command_runner.go:130] > 42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b
	I0731 21:00:38.651393 1130033 command_runner.go:130] > a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e
	I0731 21:00:38.651400 1130033 command_runner.go:130] > 135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109
	I0731 21:00:38.651419 1130033 cri.go:89] found id: "5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff"
	I0731 21:00:38.651427 1130033 cri.go:89] found id: "84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d"
	I0731 21:00:38.651432 1130033 cri.go:89] found id: "006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226"
	I0731 21:00:38.651436 1130033 cri.go:89] found id: "3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250"
	I0731 21:00:38.651443 1130033 cri.go:89] found id: "4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815"
	I0731 21:00:38.651448 1130033 cri.go:89] found id: "42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b"
	I0731 21:00:38.651453 1130033 cri.go:89] found id: "a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e"
	I0731 21:00:38.651457 1130033 cri.go:89] found id: "135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109"
	I0731 21:00:38.651462 1130033 cri.go:89] found id: ""
	I0731 21:00:38.651511 1130033 ssh_runner.go:195] Run: sudo runc list -f json
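A hedged note on the IDs above: they come from the quiet CRI listing run a few lines earlier (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`). The non-quiet form below is only an illustration of how to see the same kube-system containers with names and states; it is not a command this run executes.

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system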
	
	
	==> CRI-O <==
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.719995051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=495100ef-3ce4-41bb-b613-036ea757132e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.721171396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f595e5b9-ebd3-4db8-b35a-c47d389722f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.721590322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459888721568035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f595e5b9-ebd3-4db8-b35a-c47d389722f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.722203560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa23db32-d856-45fe-be8d-13b8633af5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.722287325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa23db32-d856-45fe-be8d-13b8633af5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.722621893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa23db32-d856-45fe-be8d-13b8633af5af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.756008032Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fc8a7c60-b9e2-40e4-8c09-1945e783e593 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.756366119Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6q6qp,Uid:d932eb77-1509-4fc7-a3ab-7315556707b0,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459678218351844,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086084578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nl9gd,Uid:d4a24288-5134-4044-9ca6-a310ea329b72,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1722459644488250039,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086072695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cf5142c-160e-4228-81ec-e47c2de8701d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459644453593792,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T21:00:44.086083287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&PodSandboxMetadata{Name:kindnet-dnshn,Uid:096976bc-3005-4c8d-88a7-da32abefc439,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722459644434641211,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086078436Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-fk7mt,Uid:74dfe9b6-475f-4ff0-899a-24eaccd3540f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459644400086588,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:00:44.086081274Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-220043,Uid:41f86a014ebc23353f11c3fa77ec1699,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640614247957,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41f86a014ebc23353f11c3fa77ec1699,kubernetes.io/config.seen: 2024-07-31T21:00:40.081792063Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multi
node-220043,Uid:e19e708c02bfd2fbbc2583d15a2e1da3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640605011678,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.184:8443,kubernetes.io/config.hash: e19e708c02bfd2fbbc2583d15a2e1da3,kubernetes.io/config.seen: 2024-07-31T21:00:40.081788872Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-220043,Uid:db6c6716326d3b720901c9a477dd8c3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640601516278,Labels:map[string]string{component: kube-controller-manager,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db6c6716326d3b720901c9a477dd8c3b,kubernetes.io/config.seen: 2024-07-31T21:00:40.081790859Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-220043,Uid:83c07f69f3feae47ea13fe4158390271,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459640594171466,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.184:2379,kuberne
tes.io/config.hash: 83c07f69f3feae47ea13fe4158390271,kubernetes.io/config.seen: 2024-07-31T21:00:40.081735952Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6q6qp,Uid:d932eb77-1509-4fc7-a3ab-7315556707b0,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459319619586436,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:55:19.306600470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cf5142c-160e-4228-81ec-e47c2de8701d,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1722459266804858566,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T20:54:26.493044778Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nl9gd,Uid:d4a24288-5134-4044-9ca6-a310ea329b72,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459266794821164,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:26.488424802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&PodSandboxMetadata{Name:kindnet-dnshn,Uid:096976bc-3005-4c8d-88a7-da32abefc439,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459251612290683,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:11.276366038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&PodSandboxMetadata{Name:kube-proxy-fk7mt,Uid:74dfe9b6-475f-4ff0-899a-24eaccd3540f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459251591887462,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:54:11.267520757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-220043,Uid:83c07f69f3feae47ea13fe4158390271,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231295051518,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.184:2379,kubernetes.io/config.hash: 83c07f69f3feae47ea13fe4158390271,kubernetes.io/config.seen: 2024-07-31T20:53:50.797201282Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9
bc23dbd4bc101,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-220043,Uid:db6c6716326d3b720901c9a477dd8c3b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231287624391,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db6c6716326d3b720901c9a477dd8c3b,kubernetes.io/config.seen: 2024-07-31T20:53:50.797199336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-220043,Uid:41f86a014ebc23353f11c3fa77ec1699,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231269795541,Labels:map[string]string{component: kube-scheduler,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41f86a014ebc23353f11c3fa77ec1699,kubernetes.io/config.seen: 2024-07-31T20:53:50.797200273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-220043,Uid:e19e708c02bfd2fbbc2583d15a2e1da3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459231267507977,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint
: 192.168.39.184:8443,kubernetes.io/config.hash: e19e708c02bfd2fbbc2583d15a2e1da3,kubernetes.io/config.seen: 2024-07-31T20:53:50.797194573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fc8a7c60-b9e2-40e4-8c09-1945e783e593 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.757616887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=267c668a-1814-42d9-8c95-7c7183042d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.757722850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=267c668a-1814-42d9-8c95-7c7183042d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.758105000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=267c668a-1814-42d9-8c95-7c7183042d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.766285333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f81a4bb2-9987-4205-a8d6-adcf8be1c130 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.766353930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f81a4bb2-9987-4205-a8d6-adcf8be1c130 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.767414776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8c65bc3-8070-418d-9a96-ef122f59614d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.767947916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459888767919919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8c65bc3-8070-418d-9a96-ef122f59614d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.768612039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcd6894f-e46d-4c65-ad1a-b48d7af5fa29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.768683422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcd6894f-e46d-4c65-ad1a-b48d7af5fa29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.769117013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcd6894f-e46d-4c65-ad1a-b48d7af5fa29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.816534568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a9dfca4-8e60-4311-9023-34214f4a1d5e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.816706189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a9dfca4-8e60-4311-9023-34214f4a1d5e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.818221751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7fd8e5c-95cd-4544-abee-c47785d36e27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.818654748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722459888818629904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7fd8e5c-95cd-4544-abee-c47785d36e27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.820373505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=530fc944-d9a0-457f-84e3-47c0d5c96742 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.820432807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=530fc944-d9a0-457f-84e3-47c0d5c96742 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:04:48 multinode-220043 crio[2868]: time="2024-07-31 21:04:48.821040659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e91fa2d31eb3bf57248ee8dee32d6626746acf8f99ec50be661d0d6af05d5ef1,PodSandboxId:b2641b6a2dd0767af6c053a7bdbdea95076ddd7b72bf405896b5753f0da1329a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722459678360268911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d,PodSandboxId:750c635ae9cb3820ff571228f9f0c421f2e2ea26c882a3c7264d159b29cd22e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722459644888488667,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622,PodSandboxId:78b6e70cf4ae0d46a6f08ff546cc61ee8d1456a1fca4e91117a719c6aa205320,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459644755998288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89ed7025c9fb0599872797bcee031ebdacdc548b64f6a4dfc9319c6530efec8,PodSandboxId:1c6cc2200999b6018e454f6394a6257d9fe17e26e4fe6efee9f996b5d9190553,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459644641777771,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},An
notations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5,PodSandboxId:102cb9e816e117e06d287c95d53f91b762b6b0cf853f40d1cc605ee51edf98e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459644569826829,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.ku
bernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea,PodSandboxId:cccc2114a9ae4380b8b7d1e26925cd5989c7dde7c293192bb97a179368605fd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459640801561766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820,PodSandboxId:ed61727ff3063f0079126227cd2134e3bfd2de6dfce82cf35c0fb45406da51a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459640837450629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1,PodSandboxId:fe6268d8b75d33f536821f4b7d5d3ea858d4b97b461d4693309347bc4977e9da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459640803911886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab,PodSandboxId:8f651a7dd37fc0a9f7d8f82afea0de6af8c3f82f3bd8d3af4ffb6b2b53ac080e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459640749940745,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b129e1cbb75cd30d5c3d067ab0cf62bc01bcd51ac769c473cf160d6eb7b13c10,PodSandboxId:2146fff12e8f882677bf90336a3bd8e4f174c63130beb21fbbcf4d0b675421bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722459321241917993,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6q6qp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d932eb77-1509-4fc7-a3ab-7315556707b0,},Annotations:map[string]string{io.kubernetes.container.hash: da145cf7,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff,PodSandboxId:46d56e0cd6a9383b4d2ce1155b5057e1f36664a0787b20bc719a6e819b3ff716,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722459267020002295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nl9gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a24288-5134-4044-9ca6-a310ea329b72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a764834,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a67e26466d46af8ff953329256a6712206864da63d46e4e83b0f1087bf2a4d,PodSandboxId:764fe9a141516e6cce064a67af470d124ef6f2051fb333c42dd73d380f2828de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459266942781478,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 1cf5142c-160e-4228-81ec-e47c2de8701d,},Annotations:map[string]string{io.kubernetes.container.hash: 5cf4d7f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226,PodSandboxId:705bafc71f35ca30f8f2b9237c1c4b1880c04853dc175f6aee6f33a3065b3fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722459255209920623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dnshn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 096976bc-3005-4c8d-88a7-da32abefc439,},Annotations:map[string]string{io.kubernetes.container.hash: e0d349ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250,PodSandboxId:50d1ba3d1a7da3db27cacb59406b755d22c346006a37e1808d9b9a52a9e79e4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722459251874507730,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mt,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 74dfe9b6-475f-4ff0-899a-24eaccd3540f,},Annotations:map[string]string{io.kubernetes.container.hash: 5eafec3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815,PodSandboxId:efcf0a24ebb9267f504793676ce07a86d0237443a6df6929c45e6614aa6a4291,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722459231673468827,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
41f86a014ebc23353f11c3fa77ec1699,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b,PodSandboxId:51a79137efba6e651bfe0509413245ef1e38c236d9b4ec1b5b9bc23dbd4bc101,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722459231669116557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: db6c6716326d3b720901c9a477dd8c3b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e,PodSandboxId:be0f2440464759e9d44a447eeeda329423805547225184fa780b0a9152f74d2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722459231661592972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c07f69f3feae47ea13fe4158390271,
},Annotations:map[string]string{io.kubernetes.container.hash: b1cf2190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109,PodSandboxId:a5bef938fe9871371bf34e01d8649dcf4dc3f561a28e29f1ba4b3d14ed726f7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722459231499439736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-220043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19e708c02bfd2fbbc2583d15a2e1da3,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6a163873,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=530fc944-d9a0-457f-84e3-47c0d5c96742 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e91fa2d31eb3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   b2641b6a2dd07       busybox-fc5497c4f-6q6qp
	c68d47dc8c0a5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   750c635ae9cb3       kindnet-dnshn
	acca3e1ed045c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   78b6e70cf4ae0       coredns-7db6d8ff4d-nl9gd
	e89ed7025c9fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   1c6cc2200999b       storage-provisioner
	f1075b8b2253e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   102cb9e816e11       kube-proxy-fk7mt
	677830e9554b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   ed61727ff3063       etcd-multinode-220043
	8450b5d7a0ec4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   fe6268d8b75d3       kube-apiserver-multinode-220043
	bea8448ffa5ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   cccc2114a9ae4       kube-scheduler-multinode-220043
	bc290d47eb9a6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   8f651a7dd37fc       kube-controller-manager-multinode-220043
	b129e1cbb75cd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   2146fff12e8f8       busybox-fc5497c4f-6q6qp
	5c8b4d91d3a89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   46d56e0cd6a93       coredns-7db6d8ff4d-nl9gd
	84a67e26466d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   764fe9a141516       storage-provisioner
	006d91418c209       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   705bafc71f35c       kindnet-dnshn
	3366da9a1a344       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   50d1ba3d1a7da       kube-proxy-fk7mt
	4789555cefe12       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   efcf0a24ebb92       kube-scheduler-multinode-220043
	42a835a7cd718       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   51a79137efba6       kube-controller-manager-multinode-220043
	a018ca65938ad       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   be0f244046475       etcd-multinode-220043
	135e3a794a671       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   a5bef938fe987       kube-apiserver-multinode-220043
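	
	The listing above is a CRI-level view of every container on the primary node, including the exited first-attempt containers from before the restart. Assuming crictl is available inside the VM and pointed at the CRI-O socket named in the node annotations further down (unix:///var/run/crio/crio.sock), a comparable listing could be reproduced with:
	
	  # runtime endpoint assumed from the node's cri-socket annotation
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a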
	
	
	==> coredns [5c8b4d91d3a898e2e82d0a2e0beb89871c2785387ddde851d641376bce6e3fff] <==
	[INFO] 10.244.1.2:37910 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00192437s
	[INFO] 10.244.1.2:48874 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159033s
	[INFO] 10.244.1.2:53899 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145334s
	[INFO] 10.244.1.2:45189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447127s
	[INFO] 10.244.1.2:56731 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073554s
	[INFO] 10.244.1.2:54665 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068413s
	[INFO] 10.244.1.2:35044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069632s
	[INFO] 10.244.0.3:41195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076516s
	[INFO] 10.244.0.3:44592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040464s
	[INFO] 10.244.0.3:53053 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033754s
	[INFO] 10.244.0.3:56475 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057414s
	[INFO] 10.244.1.2:60401 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103334s
	[INFO] 10.244.1.2:43267 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069481s
	[INFO] 10.244.1.2:46759 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066157s
	[INFO] 10.244.1.2:37235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063279s
	[INFO] 10.244.0.3:36517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095833s
	[INFO] 10.244.0.3:59788 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094153s
	[INFO] 10.244.0.3:47975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086171s
	[INFO] 10.244.0.3:33465 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066933s
	[INFO] 10.244.1.2:59323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011084s
	[INFO] 10.244.1.2:40674 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164319s
	[INFO] 10.244.1.2:56217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073787s
	[INFO] 10.244.1.2:44710 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063369s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [acca3e1ed045c397f0c2185a3b71983b4463e52217e63508a076855ee1a2a622] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56998 - 52418 "HINFO IN 360002067607903876.7109424447820596251. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020937782s
	
	
	==> describe nodes <==
	Name:               multinode-220043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-220043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=multinode-220043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_53_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:53:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-220043
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:04:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:53:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:00:43 +0000   Wed, 31 Jul 2024 20:54:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    multinode-220043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc97b33a023c4b5f9cb1c356ee5766ba
	  System UUID:                bc97b33a-023c-4b5f-9cb1-c356ee5766ba
	  Boot ID:                    c6913746-254d-474c-a7f6-c153c0501375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6q6qp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 coredns-7db6d8ff4d-nl9gd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-220043                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-dnshn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-220043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-220043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fk7mt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-220043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-220043 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-220043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-220043 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-220043 event: Registered Node multinode-220043 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-220043 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-220043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-220043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-220043 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                node-controller  Node multinode-220043 event: Registered Node multinode-220043 in Controller
	
	
	Name:               multinode-220043-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-220043-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=multinode-220043
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T21_01_25_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:01:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-220043-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:02:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:03:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:03:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:03:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 21:01:55 +0000   Wed, 31 Jul 2024 21:03:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-220043-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 572fe7a56be640cc8f1e1a65d2fae511
	  System UUID:                572fe7a5-6be6-40cc-8f1e-1a65d2fae511
	  Boot ID:                    c96a58f6-6967-4e27-b614-e09a07a31b86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9l78d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-zrb57              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m52s
	  kube-system                 kube-proxy-dk6fj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m52s (x3 over 9m52s)  kubelet          Node multinode-220043-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x3 over 9m52s)  kubelet          Node multinode-220043-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x3 over 9m52s)  kubelet          Node multinode-220043-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-220043-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-220043-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-220043-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-220043-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-220043-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-220043-m02 status is now: NodeNotReady
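	
	The description above shows multinode-220043-m02 tainted with node.kubernetes.io/unreachable and all conditions reported as Unknown after its kubelet stopped posting status, matching the final NodeNotReady event. As a sketch, and assuming the kubeconfig context carries the same name as the minikube profile (an assumption, not stated in this log), the Ready condition could be queried directly:
	
	  # context name assumed to match the profile name
	  kubectl --context multinode-220043 get node multinode-220043-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	A value of Unknown rather than False means the control plane simply lost contact with the kubelet, which is consistent with the node having been stopped.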
	
	
	==> dmesg <==
	[  +0.060748] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.213008] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.127614] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.303419] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.269470] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.069300] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.688816] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.553270] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.012011] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.085910] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 20:54] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.727618] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +5.151667] kauditd_printk_skb: 59 callbacks suppressed
	[Jul31 20:55] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:00] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.143454] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.205078] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.163344] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.286545] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +1.282766] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +1.821513] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +4.602021] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.664419] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.110837] systemd-fstab-generator[3906]: Ignoring "noauto" option for root device
	[Jul31 21:01] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [677830e9554b382ec739854dbc77ce19dc99e6d079e871629bd6116e04466820] <==
	{"level":"info","ts":"2024-07-31T21:00:41.224847Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:00:41.223843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T21:00:41.224961Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T21:00:41.223928Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T21:00:41.224039Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.225074Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.225105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:00:41.224397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea switched to configuration voters=(10993975698582176490)"}
	{"level":"info","ts":"2024-07-31T21:00:41.227993Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","added-peer-id":"989272a6374482ea","added-peer-peer-urls":["https://192.168.39.184:2380"]}
	{"level":"info","ts":"2024-07-31T21:00:41.22901Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:00:41.232611Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:00:42.644283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 2"}
	{"level":"info","ts":"2024-07-31T21:00:42.644471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.644553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.644585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.64461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-07-31T21:00:42.650978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:00:42.650935Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:multinode-220043 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:00:42.652001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:00:42.652228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:00:42.652257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:00:42.65309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:00:42.653706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	
	
	==> etcd [a018ca65938ad9c19a2c695ded2cfb0d2c89e6d8ab6de39a7cd06805f2ca924e] <==
	{"level":"info","ts":"2024-07-31T20:54:57.381137Z","caller":"traceutil/trace.go:171","msg":"trace[2027806418] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:462; }","duration":"126.83811ms","start":"2024-07-31T20:54:57.254252Z","end":"2024-07-31T20:54:57.38109Z","steps":["trace[2027806418] 'read index received'  (duration: 39.89µs)","trace[2027806418] 'applied index is now lower than readState.Index'  (duration: 126.796073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:54:57.381434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.178429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-31T20:54:57.382673Z","caller":"traceutil/trace.go:171","msg":"trace[1317591629] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:441; }","duration":"128.442791ms","start":"2024-07-31T20:54:57.254208Z","end":"2024-07-31T20:54:57.382651Z","steps":["trace[1317591629] 'agreement among raft nodes before linearized reading'  (duration: 127.138732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:52.066861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.012314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433511844067669587 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-220043-m03.17e767a78cbb4891\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-220043-m03.17e767a78cbb4891\" value_size:646 lease:210139807212893330 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:55:52.067137Z","caller":"traceutil/trace.go:171","msg":"trace[1896681757] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"222.308725ms","start":"2024-07-31T20:55:51.844813Z","end":"2024-07-31T20:55:52.067121Z","steps":["trace[1896681757] 'process raft request'  (duration: 65.982766ms)","trace[1896681757] 'compare'  (duration: 155.863359ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:55:52.067498Z","caller":"traceutil/trace.go:171","msg":"trace[481020760] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"219.504489ms","start":"2024-07-31T20:55:51.847977Z","end":"2024-07-31T20:55:52.067481Z","steps":["trace[481020760] 'read index received'  (duration: 62.82736ms)","trace[481020760] 'applied index is now lower than readState.Index'  (duration: 156.676336ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:55:52.067724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.736696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T20:55:52.072499Z","caller":"traceutil/trace.go:171","msg":"trace[1291209307] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:577; }","duration":"224.520228ms","start":"2024-07-31T20:55:51.84795Z","end":"2024-07-31T20:55:52.07247Z","steps":["trace[1291209307] 'agreement among raft nodes before linearized reading'  (duration: 219.716784ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:52.071292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.847297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-220043-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-31T20:55:52.072704Z","caller":"traceutil/trace.go:171","msg":"trace[1781781647] range","detail":"{range_begin:/registry/minions/multinode-220043-m03; range_end:; response_count:1; response_revision:577; }","duration":"125.287651ms","start":"2024-07-31T20:55:51.947406Z","end":"2024-07-31T20:55:52.072693Z","steps":["trace[1781781647] 'agreement among raft nodes before linearized reading'  (duration: 123.832611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:55:52.073695Z","caller":"traceutil/trace.go:171","msg":"trace[134050814] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"170.240521ms","start":"2024-07-31T20:55:51.897207Z","end":"2024-07-31T20:55:52.067448Z","steps":["trace[134050814] 'process raft request'  (duration: 169.865371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:55:56.36123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.8194ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433511844067669678 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.184\" mod_revision:564 > success:<request_put:<key:\"/registry/masterleases/192.168.39.184\" value_size:67 lease:210139807212893868 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.184\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:55:56.361398Z","caller":"traceutil/trace.go:171","msg":"trace[877367462] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"188.222659ms","start":"2024-07-31T20:55:56.173163Z","end":"2024-07-31T20:55:56.361385Z","steps":["trace[877367462] 'process raft request'  (duration: 65.189627ms)","trace[877367462] 'compare'  (duration: 122.750255ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:55:56.699187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.591046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-220043-m03\" ","response":"range_response_count:1 size:3228"}
	{"level":"info","ts":"2024-07-31T20:55:56.699461Z","caller":"traceutil/trace.go:171","msg":"trace[1160418171] range","detail":"{range_begin:/registry/minions/multinode-220043-m03; range_end:; response_count:1; response_revision:617; }","duration":"143.884863ms","start":"2024-07-31T20:55:56.555557Z","end":"2024-07-31T20:55:56.699442Z","steps":["trace[1160418171] 'range keys from in-memory index tree'  (duration: 143.42711ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:59:04.779054Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T20:59:04.779121Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-220043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"warn","ts":"2024-07-31T20:59:04.779184Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.779261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.865691Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:59:04.865988Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:59:04.866104Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2024-07-31T20:59:04.868559Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T20:59:04.868727Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-07-31T20:59:04.868804Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-220043","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> kernel <==
	 21:04:49 up 11 min,  0 users,  load average: 0.14, 0.15, 0.08
	Linux multinode-220043 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [006d91418c209a2fe2603b0f5d1e32649f8a579bb883547a2e557b39b4082226] <==
	I0731 20:58:16.234244       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:26.241074       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:26.241179       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:26.241319       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:26.241340       1 main.go:299] handling current node
	I0731 20:58:26.241363       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:26.241379       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:36.241051       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:36.241094       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:36.241245       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:36.241264       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:36.241325       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:36.241345       1 main.go:299] handling current node
	I0731 20:58:46.242099       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:46.242212       1 main.go:299] handling current node
	I0731 20:58:46.242242       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:46.242261       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:46.242394       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:46.242415       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	I0731 20:58:56.241336       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 20:58:56.241442       1 main.go:299] handling current node
	I0731 20:58:56.241471       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:58:56.241489       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 20:58:56.241616       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0731 20:58:56.241660       1 main.go:322] Node multinode-220043-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c68d47dc8c0a586c1b25f5aaeb51a80f8eebb6c13072282612833049984f476d] <==
	I0731 21:03:45.635332       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:03:55.644175       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:03:55.644275       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:03:55.644436       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:03:55.644469       1 main.go:299] handling current node
	I0731 21:04:05.644114       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:04:05.644165       1 main.go:299] handling current node
	I0731 21:04:05.644183       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:04:05.644192       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:04:15.635834       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:04:15.635954       1 main.go:299] handling current node
	I0731 21:04:15.635982       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:04:15.635999       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:04:25.638714       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:04:25.638790       1 main.go:299] handling current node
	I0731 21:04:25.638806       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:04:25.638812       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:04:35.634809       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:04:35.635206       1 main.go:299] handling current node
	I0731 21:04:35.635245       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:04:35.635255       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	I0731 21:04:45.634827       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0731 21:04:45.634986       1 main.go:299] handling current node
	I0731 21:04:45.635032       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 21:04:45.635061       1 main.go:322] Node multinode-220043-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [135e3a794a6719b3ab7a60da3329bcba13510f4f280a830b926eb76fb9b23109] <==
	I0731 20:53:56.148809       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 20:53:56.156781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184]
	I0731 20:53:56.158101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:53:56.163365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 20:53:56.278571       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:53:57.189884       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:53:57.221795       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 20:53:57.238106       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:54:11.106648       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 20:54:11.177377       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 20:55:22.735083       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44402: use of closed network connection
	E0731 20:55:22.907921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44418: use of closed network connection
	E0731 20:55:23.093550       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44434: use of closed network connection
	E0731 20:55:23.265728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44462: use of closed network connection
	E0731 20:55:23.441663       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44482: use of closed network connection
	E0731 20:55:23.608544       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44504: use of closed network connection
	E0731 20:55:23.915079       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44524: use of closed network connection
	E0731 20:55:24.087506       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44538: use of closed network connection
	E0731 20:55:24.258701       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44552: use of closed network connection
	E0731 20:55:24.428387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8443->192.168.39.1:44564: use of closed network connection
	I0731 20:59:04.777171       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 20:59:04.782275       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:59:04.791281       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:59:04.791717       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0731 20:59:04.809233       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8450b5d7a0ec446cf293dfd68ba37b5edabfb5e3aaa42314933d7349cc03f7d1] <==
	I0731 21:00:43.829721       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0731 21:00:43.930045       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:00:43.937708       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:00:43.941950       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:00:43.942418       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:00:43.942463       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:00:43.942481       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:00:43.942487       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:00:43.944205       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:00:43.944326       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:00:43.945330       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:00:43.950730       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:00:43.950840       1 policy_source.go:224] refreshing policies
	I0731 21:00:43.951097       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 21:00:43.951134       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 21:00:43.951715       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 21:00:43.965920       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:00:44.833253       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 21:00:46.078580       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:00:46.203313       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:00:46.219296       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:00:46.299488       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:00:46.306519       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 21:00:57.083786       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:00:57.119792       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [42a835a7cd718fdd1f06e7a98acd85c4b62e034b9329876d333b362d6b02a13b] <==
	I0731 20:54:57.431528       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m02\" does not exist"
	I0731 20:54:57.444344       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:55:00.208503       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-220043-m02"
	I0731 20:55:17.027091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:55:19.299575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.431885ms"
	I0731 20:55:19.329708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.058576ms"
	I0731 20:55:19.344677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.553183ms"
	I0731 20:55:19.344911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.836µs"
	I0731 20:55:21.412267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.757545ms"
	I0731 20:55:21.413447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.123µs"
	I0731 20:55:22.280956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.823307ms"
	I0731 20:55:22.281261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.243µs"
	I0731 20:55:52.069560       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 20:55:52.070007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:55:52.124451       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:55:55.226898       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-220043-m03"
	I0731 20:56:10.440425       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m03"
	I0731 20:56:39.246026       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:56:40.239820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:56:40.240908       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 20:56:40.260496       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.3.0/24"]
	I0731 20:56:59.084139       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 20:57:45.273453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m03"
	I0731 20:57:45.331603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.085404ms"
	I0731 20:57:45.331837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.34µs"
	
	
	==> kube-controller-manager [bc290d47eb9a607291ec41c97fc534019e0d11602707c47ebfdbf47c6a20f8ab] <==
	I0731 21:01:25.006659       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m02" podCIDRs=["10.244.1.0/24"]
	I0731 21:01:26.882820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.378µs"
	I0731 21:01:26.907176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.011µs"
	I0731 21:01:26.916560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.601µs"
	I0731 21:01:26.929584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.473µs"
	I0731 21:01:26.947393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.297µs"
	I0731 21:01:26.954326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.589µs"
	I0731 21:01:26.957293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.634µs"
	I0731 21:01:44.081920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:01:44.105553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.923µs"
	I0731 21:01:44.119106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.451µs"
	I0731 21:01:48.145289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.612178ms"
	I0731 21:01:48.145371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.707µs"
	I0731 21:02:02.421442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:02:03.694904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:02:03.695868       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-220043-m03\" does not exist"
	I0731 21:02:03.716097       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-220043-m03" podCIDRs=["10.244.2.0/24"]
	I0731 21:02:22.286253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:02:27.484597       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-220043-m02"
	I0731 21:03:07.382872       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.798465ms"
	I0731 21:03:07.383036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.792µs"
	I0731 21:03:17.117838       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rz5ws"
	I0731 21:03:17.145435       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rz5ws"
	I0731 21:03:17.145481       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8m9rx"
	I0731 21:03:17.169891       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8m9rx"
	
	
	==> kube-proxy [3366da9a1a3441a2f5101042186431a28710c5caad80d41f97904c6e349b8250] <==
	I0731 20:54:12.366294       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:54:12.387611       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	I0731 20:54:12.433493       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:54:12.433539       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:54:12.433556       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:54:12.439495       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:54:12.439913       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:54:12.439961       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:54:12.441865       1 config.go:192] "Starting service config controller"
	I0731 20:54:12.441971       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:54:12.442020       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:54:12.442038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:54:12.443664       1 config.go:319] "Starting node config controller"
	I0731 20:54:12.443792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:54:12.542819       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:54:12.542974       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:54:12.544797       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f1075b8b2253eabcbdf95cbcb39519780a2c4569316f25385ac27579d5ae18e5] <==
	I0731 21:00:44.782060       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:00:44.795358       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	I0731 21:00:44.885938       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:00:44.886000       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:00:44.886023       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:00:44.891927       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:00:44.892109       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:00:44.892139       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:00:44.896305       1 config.go:319] "Starting node config controller"
	I0731 21:00:44.897268       1 config.go:192] "Starting service config controller"
	I0731 21:00:44.897353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:00:44.897459       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:00:44.897480       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:00:44.899863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:00:44.998327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:00:44.998437       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:00:44.999988       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4789555cefe125d9a5d4f17eec3fd1b0693bc9814ba4eb130eb57cb786adb815] <==
	E0731 20:53:54.335728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:53:55.186081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:53:55.186126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:53:55.223467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:53:55.223544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 20:53:55.415933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:53:55.416035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:53:55.543049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.543212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.560413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:53:55.560648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:53:55.593038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:53:55.593084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:53:55.602731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.602821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.635885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:53:55.635959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:53:55.647796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:53:55.647922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:53:55.662202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:53:55.662286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:53:55.799658       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:53:55.799819       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 20:53:57.618995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 20:59:04.779654       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea8448ffa5ac74e11afc8fc7387782a7dad2719e28b3fe1d0d681e66641a0ea] <==
	I0731 21:00:41.790400       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:00:43.868265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:00:43.868307       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:00:43.868317       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:00:43.868324       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:00:43.911223       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:00:43.911258       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:00:43.912705       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:00:43.916924       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:00:43.916958       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:00:43.916977       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:00:44.017704       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163000    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74dfe9b6-475f-4ff0-899a-24eaccd3540f-lib-modules\") pod \"kube-proxy-fk7mt\" (UID: \"74dfe9b6-475f-4ff0-899a-24eaccd3540f\") " pod="kube-system/kube-proxy-fk7mt"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163190    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/096976bc-3005-4c8d-88a7-da32abefc439-lib-modules\") pod \"kindnet-dnshn\" (UID: \"096976bc-3005-4c8d-88a7-da32abefc439\") " pod="kube-system/kindnet-dnshn"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163215    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74dfe9b6-475f-4ff0-899a-24eaccd3540f-xtables-lock\") pod \"kube-proxy-fk7mt\" (UID: \"74dfe9b6-475f-4ff0-899a-24eaccd3540f\") " pod="kube-system/kube-proxy-fk7mt"
	Jul 31 21:00:44 multinode-220043 kubelet[3082]: I0731 21:00:44.163230    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/096976bc-3005-4c8d-88a7-da32abefc439-cni-cfg\") pod \"kindnet-dnshn\" (UID: \"096976bc-3005-4c8d-88a7-da32abefc439\") " pod="kube-system/kindnet-dnshn"
	Jul 31 21:00:49 multinode-220043 kubelet[3082]: I0731 21:00:49.526461    3082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 21:01:40 multinode-220043 kubelet[3082]: E0731 21:01:40.186675    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:01:40 multinode-220043 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:02:40 multinode-220043 kubelet[3082]: E0731 21:02:40.193538    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:02:40 multinode-220043 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:02:40 multinode-220043 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:02:40 multinode-220043 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:02:40 multinode-220043 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:03:40 multinode-220043 kubelet[3082]: E0731 21:03:40.186378    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:03:40 multinode-220043 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:03:40 multinode-220043 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:03:40 multinode-220043 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:03:40 multinode-220043 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:04:40 multinode-220043 kubelet[3082]: E0731 21:04:40.187004    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:04:40 multinode-220043 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:04:40 multinode-220043 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:04:40 multinode-220043 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:04:40 multinode-220043 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:04:48.409181 1131948 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19360-1093692/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-220043 -n multinode-220043
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-220043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                    
x
+
TestPreload (180.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 21:09:31.358334 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m49.179805463s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-758694 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-758694 image pull gcr.io/k8s-minikube/busybox: (2.500581329s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-758694
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-758694: (6.645988182s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.521943024s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-758694 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-31 21:11:37.984565285 +0000 UTC m=+3739.142996411
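For reference, the failing sequence can be replayed by hand with the same commands the harness ran (copied verbatim from the preload_test.go run lines above; the profile name is the one used by this test run and is otherwise arbitrary). The final image list step is where the previously pulled busybox image is expected to reappear after the restart:

	out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-758694 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-758694
	out/minikube-linux-amd64 start -p test-preload-758694 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-758694 image list    # gcr.io/k8s-minikube/busybox should be listed here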
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-758694 -n test-preload-758694
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-758694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-758694 logs -n 25: (1.034065921s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043 sudo cat                                       | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt                       | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m02:/home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n                                                                 | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | multinode-220043-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-220043 ssh -n multinode-220043-m02 sudo cat                                   | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	|         | /home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-220043 node stop m03                                                          | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:56 UTC |
	| node    | multinode-220043 node start                                                             | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:56 UTC | 31 Jul 24 20:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| stop    | -p multinode-220043                                                                     | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:57 UTC |                     |
	| start   | -p multinode-220043                                                                     | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 20:59 UTC | 31 Jul 24 21:02 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC |                     |
	| node    | multinode-220043 node delete                                                            | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC | 31 Jul 24 21:02 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-220043 stop                                                                   | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:02 UTC |                     |
	| start   | -p multinode-220043                                                                     | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:04 UTC | 31 Jul 24 21:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-220043                                                                | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:07 UTC |                     |
	| start   | -p multinode-220043-m02                                                                 | multinode-220043-m02 | jenkins | v1.33.1 | 31 Jul 24 21:07 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-220043-m03                                                                 | multinode-220043-m03 | jenkins | v1.33.1 | 31 Jul 24 21:07 UTC | 31 Jul 24 21:08 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-220043                                                                 | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:08 UTC |                     |
	| delete  | -p multinode-220043-m03                                                                 | multinode-220043-m03 | jenkins | v1.33.1 | 31 Jul 24 21:08 UTC | 31 Jul 24 21:08 UTC |
	| delete  | -p multinode-220043                                                                     | multinode-220043     | jenkins | v1.33.1 | 31 Jul 24 21:08 UTC | 31 Jul 24 21:08 UTC |
	| start   | -p test-preload-758694                                                                  | test-preload-758694  | jenkins | v1.33.1 | 31 Jul 24 21:08 UTC | 31 Jul 24 21:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-758694 image pull                                                          | test-preload-758694  | jenkins | v1.33.1 | 31 Jul 24 21:10 UTC | 31 Jul 24 21:10 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-758694                                                                  | test-preload-758694  | jenkins | v1.33.1 | 31 Jul 24 21:10 UTC | 31 Jul 24 21:10 UTC |
	| start   | -p test-preload-758694                                                                  | test-preload-758694  | jenkins | v1.33.1 | 31 Jul 24 21:10 UTC | 31 Jul 24 21:11 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-758694 image list                                                          | test-preload-758694  | jenkins | v1.33.1 | 31 Jul 24 21:11 UTC | 31 Jul 24 21:11 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:10:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:10:38.281079 1134385 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:10:38.281191 1134385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:10:38.281195 1134385 out.go:304] Setting ErrFile to fd 2...
	I0731 21:10:38.281199 1134385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:10:38.281393 1134385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:10:38.281952 1134385 out.go:298] Setting JSON to false
	I0731 21:10:38.282986 1134385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17589,"bootTime":1722442649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:10:38.283057 1134385 start.go:139] virtualization: kvm guest
	I0731 21:10:38.285170 1134385 out.go:177] * [test-preload-758694] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:10:38.287148 1134385 notify.go:220] Checking for updates...
	I0731 21:10:38.287166 1134385 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:10:38.288363 1134385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:10:38.289514 1134385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:10:38.290662 1134385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:10:38.291833 1134385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:10:38.293068 1134385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:10:38.294562 1134385 config.go:182] Loaded profile config "test-preload-758694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 21:10:38.294985 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:10:38.295058 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:10:38.310382 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0731 21:10:38.310942 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:10:38.311554 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:10:38.311576 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:10:38.311914 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:10:38.312124 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:10:38.314040 1134385 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 21:10:38.315285 1134385 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:10:38.315657 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:10:38.315698 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:10:38.331703 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I0731 21:10:38.332198 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:10:38.332817 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:10:38.332837 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:10:38.333158 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:10:38.333402 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:10:38.370822 1134385 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:10:38.371999 1134385 start.go:297] selected driver: kvm2
	I0731 21:10:38.372021 1134385 start.go:901] validating driver "kvm2" against &{Name:test-preload-758694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-758694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:10:38.372193 1134385 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:10:38.373353 1134385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:10:38.373463 1134385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:10:38.389231 1134385 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:10:38.389654 1134385 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:10:38.389726 1134385 cni.go:84] Creating CNI manager for ""
	I0731 21:10:38.389748 1134385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:10:38.389839 1134385 start.go:340] cluster config:
	{Name:test-preload-758694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-758694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:10:38.389970 1134385 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:10:38.391866 1134385 out.go:177] * Starting "test-preload-758694" primary control-plane node in "test-preload-758694" cluster
	I0731 21:10:38.393335 1134385 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 21:10:38.417247 1134385 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 21:10:38.417281 1134385 cache.go:56] Caching tarball of preloaded images
	I0731 21:10:38.417434 1134385 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 21:10:38.419028 1134385 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0731 21:10:38.420223 1134385 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:10:38.449316 1134385 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 21:10:41.808806 1134385 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:10:41.808937 1134385 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:10:42.685677 1134385 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0731 21:10:42.685837 1134385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/config.json ...
	I0731 21:10:42.686111 1134385 start.go:360] acquireMachinesLock for test-preload-758694: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:10:42.686193 1134385 start.go:364] duration metric: took 54.26µs to acquireMachinesLock for "test-preload-758694"
	I0731 21:10:42.686214 1134385 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:10:42.686223 1134385 fix.go:54] fixHost starting: 
	I0731 21:10:42.686556 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:10:42.686586 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:10:42.702256 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0731 21:10:42.702756 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:10:42.703358 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:10:42.703388 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:10:42.703793 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:10:42.704027 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:10:42.704295 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetState
	I0731 21:10:42.706168 1134385 fix.go:112] recreateIfNeeded on test-preload-758694: state=Stopped err=<nil>
	I0731 21:10:42.706209 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	W0731 21:10:42.706401 1134385 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:10:42.708508 1134385 out.go:177] * Restarting existing kvm2 VM for "test-preload-758694" ...
	I0731 21:10:42.709720 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Start
	I0731 21:10:42.709936 1134385 main.go:141] libmachine: (test-preload-758694) Ensuring networks are active...
	I0731 21:10:42.710697 1134385 main.go:141] libmachine: (test-preload-758694) Ensuring network default is active
	I0731 21:10:42.711029 1134385 main.go:141] libmachine: (test-preload-758694) Ensuring network mk-test-preload-758694 is active
	I0731 21:10:42.711330 1134385 main.go:141] libmachine: (test-preload-758694) Getting domain xml...
	I0731 21:10:42.711976 1134385 main.go:141] libmachine: (test-preload-758694) Creating domain...
	I0731 21:10:43.928944 1134385 main.go:141] libmachine: (test-preload-758694) Waiting to get IP...
	I0731 21:10:43.929828 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:43.930204 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:43.930292 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:43.930193 1134436 retry.go:31] will retry after 234.102965ms: waiting for machine to come up
	I0731 21:10:44.165729 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:44.166161 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:44.166188 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:44.166103 1134436 retry.go:31] will retry after 344.775567ms: waiting for machine to come up
	I0731 21:10:44.512904 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:44.513378 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:44.513422 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:44.513344 1134436 retry.go:31] will retry after 457.155892ms: waiting for machine to come up
	I0731 21:10:44.972084 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:44.972485 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:44.972513 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:44.972453 1134436 retry.go:31] will retry after 378.419403ms: waiting for machine to come up
	I0731 21:10:45.352078 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:45.352488 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:45.352518 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:45.352430 1134436 retry.go:31] will retry after 714.491756ms: waiting for machine to come up
	I0731 21:10:46.068210 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:46.068649 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:46.068679 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:46.068579 1134436 retry.go:31] will retry after 582.95844ms: waiting for machine to come up
	I0731 21:10:46.653522 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:46.654065 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:46.654092 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:46.653984 1134436 retry.go:31] will retry after 743.246534ms: waiting for machine to come up
	I0731 21:10:47.398917 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:47.399298 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:47.399321 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:47.399269 1134436 retry.go:31] will retry after 1.020529503s: waiting for machine to come up
	I0731 21:10:48.421845 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:48.422317 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:48.422347 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:48.422291 1134436 retry.go:31] will retry after 1.630075392s: waiting for machine to come up
	I0731 21:10:50.055340 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:50.055814 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:50.055845 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:50.055740 1134436 retry.go:31] will retry after 1.538302478s: waiting for machine to come up
	I0731 21:10:51.596649 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:51.596998 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:51.597022 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:51.596959 1134436 retry.go:31] will retry after 1.8628901s: waiting for machine to come up
	I0731 21:10:53.461709 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:53.462131 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:53.462157 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:53.462079 1134436 retry.go:31] will retry after 3.579745595s: waiting for machine to come up
	I0731 21:10:57.045316 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:57.045729 1134385 main.go:141] libmachine: (test-preload-758694) DBG | unable to find current IP address of domain test-preload-758694 in network mk-test-preload-758694
	I0731 21:10:57.045770 1134385 main.go:141] libmachine: (test-preload-758694) DBG | I0731 21:10:57.045680 1134436 retry.go:31] will retry after 2.8189972s: waiting for machine to come up
	I0731 21:10:59.867193 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:59.867783 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has current primary IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:59.867837 1134385 main.go:141] libmachine: (test-preload-758694) Found IP for machine: 192.168.39.112
	I0731 21:10:59.867864 1134385 main.go:141] libmachine: (test-preload-758694) Reserving static IP address...
	I0731 21:10:59.868304 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "test-preload-758694", mac: "52:54:00:5a:16:80", ip: "192.168.39.112"} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:10:59.868325 1134385 main.go:141] libmachine: (test-preload-758694) DBG | skip adding static IP to network mk-test-preload-758694 - found existing host DHCP lease matching {name: "test-preload-758694", mac: "52:54:00:5a:16:80", ip: "192.168.39.112"}
	I0731 21:10:59.868336 1134385 main.go:141] libmachine: (test-preload-758694) Reserved static IP address: 192.168.39.112
	I0731 21:10:59.868347 1134385 main.go:141] libmachine: (test-preload-758694) Waiting for SSH to be available...
	I0731 21:10:59.868358 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Getting to WaitForSSH function...
	I0731 21:10:59.870482 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:59.870757 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:10:59.870794 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:10:59.870881 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Using SSH client type: external
	I0731 21:10:59.870909 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa (-rw-------)
	I0731 21:10:59.870945 1134385 main.go:141] libmachine: (test-preload-758694) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:10:59.870961 1134385 main.go:141] libmachine: (test-preload-758694) DBG | About to run SSH command:
	I0731 21:10:59.870971 1134385 main.go:141] libmachine: (test-preload-758694) DBG | exit 0
	I0731 21:11:00.000131 1134385 main.go:141] libmachine: (test-preload-758694) DBG | SSH cmd err, output: <nil>: 
	I0731 21:11:00.000504 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetConfigRaw
	I0731 21:11:00.001191 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetIP
	I0731 21:11:00.003497 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.003818 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.003855 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.004175 1134385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/config.json ...
	I0731 21:11:00.004406 1134385 machine.go:94] provisionDockerMachine start ...
	I0731 21:11:00.004427 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:00.004686 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.006666 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.006999 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.007036 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.007169 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:00.007385 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.007577 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.007702 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:00.007875 1134385 main.go:141] libmachine: Using SSH client type: native
	I0731 21:11:00.008074 1134385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0731 21:11:00.008085 1134385 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:11:00.124597 1134385 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:11:00.124630 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetMachineName
	I0731 21:11:00.124902 1134385 buildroot.go:166] provisioning hostname "test-preload-758694"
	I0731 21:11:00.124933 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetMachineName
	I0731 21:11:00.125102 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.127912 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.128337 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.128369 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.128529 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:00.128737 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.128908 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.129065 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:00.129243 1134385 main.go:141] libmachine: Using SSH client type: native
	I0731 21:11:00.129422 1134385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0731 21:11:00.129434 1134385 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-758694 && echo "test-preload-758694" | sudo tee /etc/hostname
	I0731 21:11:00.260831 1134385 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-758694
	
	I0731 21:11:00.260861 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.263771 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.264158 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.264197 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.264402 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:00.264632 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.264794 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.264932 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:00.265102 1134385 main.go:141] libmachine: Using SSH client type: native
	I0731 21:11:00.265312 1134385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0731 21:11:00.265332 1134385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-758694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-758694/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-758694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:11:00.390350 1134385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:11:00.390389 1134385 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:11:00.390415 1134385 buildroot.go:174] setting up certificates
	I0731 21:11:00.390429 1134385 provision.go:84] configureAuth start
	I0731 21:11:00.390443 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetMachineName
	I0731 21:11:00.390769 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetIP
	I0731 21:11:00.393813 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.394263 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.394303 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.394524 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.396555 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.396898 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.396933 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.397093 1134385 provision.go:143] copyHostCerts
	I0731 21:11:00.397154 1134385 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:11:00.397164 1134385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:11:00.397226 1134385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:11:00.397320 1134385 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:11:00.397327 1134385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:11:00.397350 1134385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:11:00.397401 1134385 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:11:00.397410 1134385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:11:00.397431 1134385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:11:00.397481 1134385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.test-preload-758694 san=[127.0.0.1 192.168.39.112 localhost minikube test-preload-758694]
	I0731 21:11:00.586093 1134385 provision.go:177] copyRemoteCerts
	I0731 21:11:00.586158 1134385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:11:00.586191 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.588772 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.589064 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.589102 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.589327 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:00.589589 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.589762 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:00.589931 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:00.678183 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:11:00.702140 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 21:11:00.724688 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:11:00.747502 1134385 provision.go:87] duration metric: took 357.059923ms to configureAuth
	I0731 21:11:00.747538 1134385 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:11:00.747740 1134385 config.go:182] Loaded profile config "test-preload-758694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 21:11:00.747868 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:00.750624 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.750960 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:00.750992 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:00.751154 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:00.751358 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.751539 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:00.751684 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:00.751925 1134385 main.go:141] libmachine: Using SSH client type: native
	I0731 21:11:00.752178 1134385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0731 21:11:00.752201 1134385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:11:01.020113 1134385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:11:01.020148 1134385 machine.go:97] duration metric: took 1.015727039s to provisionDockerMachine
	I0731 21:11:01.020163 1134385 start.go:293] postStartSetup for "test-preload-758694" (driver="kvm2")
	I0731 21:11:01.020204 1134385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:11:01.020234 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:01.020627 1134385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:11:01.020682 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:01.023648 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.023948 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:01.023981 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.024181 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:01.024424 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:01.024650 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:01.024807 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:01.110263 1134385 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:11:01.114484 1134385 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:11:01.114515 1134385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:11:01.114599 1134385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:11:01.114692 1134385 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:11:01.114809 1134385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:11:01.123949 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:11:01.147517 1134385 start.go:296] duration metric: took 127.337136ms for postStartSetup
	I0731 21:11:01.147570 1134385 fix.go:56] duration metric: took 18.461348756s for fixHost
	I0731 21:11:01.147593 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:01.150378 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.150700 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:01.150730 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.150866 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:01.151076 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:01.151261 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:01.151407 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:01.151581 1134385 main.go:141] libmachine: Using SSH client type: native
	I0731 21:11:01.151760 1134385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0731 21:11:01.151771 1134385 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:11:01.264757 1134385 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460261.236525249
	
	I0731 21:11:01.264788 1134385 fix.go:216] guest clock: 1722460261.236525249
	I0731 21:11:01.264796 1134385 fix.go:229] Guest: 2024-07-31 21:11:01.236525249 +0000 UTC Remote: 2024-07-31 21:11:01.147574069 +0000 UTC m=+22.902693834 (delta=88.95118ms)
	I0731 21:11:01.264819 1134385 fix.go:200] guest clock delta is within tolerance: 88.95118ms
	I0731 21:11:01.264825 1134385 start.go:83] releasing machines lock for "test-preload-758694", held for 18.578621222s
	I0731 21:11:01.264846 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:01.265172 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetIP
	I0731 21:11:01.267865 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.268227 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:01.268249 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.268438 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:01.268988 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:01.269196 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:01.269266 1134385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:11:01.269311 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:01.269447 1134385 ssh_runner.go:195] Run: cat /version.json
	I0731 21:11:01.269472 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:01.271999 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.272366 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:01.272394 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.272414 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.272525 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:01.272738 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:01.272872 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:01.272901 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:01.272973 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:01.273083 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:01.273177 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:01.273212 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:01.273349 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:01.273504 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:01.376956 1134385 ssh_runner.go:195] Run: systemctl --version
	I0731 21:11:01.382767 1134385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:11:01.525521 1134385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:11:01.531522 1134385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:11:01.531610 1134385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:11:01.548404 1134385 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:11:01.548437 1134385 start.go:495] detecting cgroup driver to use...
	I0731 21:11:01.548509 1134385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:11:01.564979 1134385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:11:01.580182 1134385 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:11:01.580247 1134385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:11:01.595252 1134385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:11:01.610036 1134385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:11:01.730538 1134385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:11:01.894646 1134385 docker.go:233] disabling docker service ...
	I0731 21:11:01.894713 1134385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:11:01.908987 1134385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:11:01.921947 1134385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:11:02.043763 1134385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:11:02.152679 1134385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:11:02.166436 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:11:02.184427 1134385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 21:11:02.184507 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.194879 1134385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:11:02.194960 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.205446 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.215947 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.226347 1134385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:11:02.236992 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.247570 1134385 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.264695 1134385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:11:02.275194 1134385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:11:02.284831 1134385 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:11:02.284895 1134385 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:11:02.297387 1134385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:11:02.307138 1134385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:11:02.415373 1134385 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:11:02.549089 1134385 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:11:02.549183 1134385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:11:02.553916 1134385 start.go:563] Will wait 60s for crictl version
	I0731 21:11:02.553988 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:02.557561 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:11:02.602078 1134385 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:11:02.602183 1134385 ssh_runner.go:195] Run: crio --version
	I0731 21:11:02.631006 1134385 ssh_runner.go:195] Run: crio --version
	I0731 21:11:02.663704 1134385 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0731 21:11:02.665266 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetIP
	I0731 21:11:02.668048 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:02.668498 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:02.668528 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:02.668840 1134385 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:11:02.672961 1134385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:11:02.684706 1134385 kubeadm.go:883] updating cluster {Name:test-preload-758694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-758694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:11:02.684829 1134385 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 21:11:02.684888 1134385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:11:02.719353 1134385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 21:11:02.719420 1134385 ssh_runner.go:195] Run: which lz4
	I0731 21:11:02.723321 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:11:02.727237 1134385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:11:02.727270 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0731 21:11:04.169011 1134385 crio.go:462] duration metric: took 1.44572703s to copy over tarball
	I0731 21:11:04.169088 1134385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:11:06.574167 1134385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.405051073s)
	I0731 21:11:06.574204 1134385 crio.go:469] duration metric: took 2.405158747s to extract the tarball
	I0731 21:11:06.574215 1134385 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:11:06.614743 1134385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:11:06.656833 1134385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 21:11:06.656863 1134385 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:11:06.656930 1134385 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 21:11:06.656960 1134385 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 21:11:06.656930 1134385 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:11:06.656991 1134385 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 21:11:06.657031 1134385 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 21:11:06.657068 1134385 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 21:11:06.657031 1134385 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 21:11:06.657006 1134385 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 21:11:06.658340 1134385 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:11:06.658433 1134385 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 21:11:06.658494 1134385 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 21:11:06.658433 1134385 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 21:11:06.658497 1134385 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 21:11:06.658433 1134385 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 21:11:06.658433 1134385 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 21:11:06.658494 1134385 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 21:11:06.797961 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0731 21:11:06.799779 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 21:11:06.805116 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0731 21:11:06.812721 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 21:11:06.814638 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 21:11:06.824504 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 21:11:06.863311 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0731 21:11:06.872080 1134385 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0731 21:11:06.872148 1134385 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 21:11:06.872207 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.941557 1134385 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 21:11:06.941622 1134385 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 21:11:06.941678 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.971667 1134385 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0731 21:11:06.971695 1134385 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0731 21:11:06.971723 1134385 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 21:11:06.971725 1134385 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 21:11:06.971777 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.971777 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.977079 1134385 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 21:11:06.977104 1134385 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 21:11:06.977129 1134385 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 21:11:06.977129 1134385 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 21:11:06.977181 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.977181 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.978326 1134385 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0731 21:11:06.978359 1134385 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 21:11:06.978365 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 21:11:06.978385 1134385 ssh_runner.go:195] Run: which crictl
	I0731 21:11:06.978465 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 21:11:06.979245 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 21:11:07.043874 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 21:11:07.043954 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 21:11:07.043992 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 21:11:07.044031 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 21:11:07.044036 1134385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 21:11:07.044117 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 21:11:07.044139 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 21:11:07.044206 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 21:11:07.055842 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 21:11:07.055948 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 21:11:07.120713 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 21:11:07.120828 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 21:11:07.120838 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 21:11:07.120930 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 21:11:07.141196 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 21:11:07.141244 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0731 21:11:07.141261 1134385 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 21:11:07.141297 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 21:11:07.141311 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 21:11:07.141207 1134385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 21:11:07.141365 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0731 21:11:07.141385 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0731 21:11:07.141394 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0731 21:11:07.141418 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0731 21:11:07.141441 1134385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 21:11:07.392745 1134385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:11:09.801017 1134385 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.659697041s)
	I0731 21:11:09.801056 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0731 21:11:09.801053 1134385 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.65971904s)
	I0731 21:11:09.801076 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 21:11:09.801092 1134385 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.659628552s)
	I0731 21:11:09.801125 1134385 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0731 21:11:09.801102 1134385 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 21:11:09.801144 1134385 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.408361029s)
	I0731 21:11:09.801189 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 21:11:10.547137 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0731 21:11:10.547188 1134385 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 21:11:10.547245 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 21:11:11.292295 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0731 21:11:11.292344 1134385 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 21:11:11.292403 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 21:11:11.738913 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0731 21:11:11.738970 1134385 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 21:11:11.739032 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 21:11:12.586571 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0731 21:11:12.586626 1134385 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 21:11:12.586726 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 21:11:13.031939 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 21:11:13.031990 1134385 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 21:11:13.032035 1134385 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0731 21:11:15.188550 1134385 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.156470962s)
	I0731 21:11:15.188582 1134385 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 21:11:15.188616 1134385 cache_images.go:123] Successfully loaded all cached images
	I0731 21:11:15.188623 1134385 cache_images.go:92] duration metric: took 8.531746072s to LoadCachedImages
	I0731 21:11:15.188637 1134385 kubeadm.go:934] updating node { 192.168.39.112 8443 v1.24.4 crio true true} ...
	I0731 21:11:15.188813 1134385 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-758694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-758694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:11:15.188889 1134385 ssh_runner.go:195] Run: crio config
	I0731 21:11:15.241244 1134385 cni.go:84] Creating CNI manager for ""
	I0731 21:11:15.241278 1134385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:11:15.241297 1134385 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:11:15.241318 1134385 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-758694 NodeName:test-preload-758694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:11:15.241458 1134385 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-758694"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:11:15.241526 1134385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0731 21:11:15.251200 1134385 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:11:15.251287 1134385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:11:15.260485 1134385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0731 21:11:15.276709 1134385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:11:15.292964 1134385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
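The kubeadm configuration printed above is rendered from the options logged at kubeadm.go:181 and copied to the node as /var/tmp/minikube/kubeadm.yaml.new. For reference, a minimal sketch of how such a rendering could look, using a hypothetical option struct and template (this is illustrative only, not minikube's actual bootstrapper code):

package main

import (
	"os"
	"text/template"
)

// Options mirrors a small, hypothetical subset of the kubeadm options logged above.
type Options struct {
	NodeName         string
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

// tmpl renders an InitConfiguration/ClusterConfiguration pair like the one printed in the log.
var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`))

func main() {
	opts := Options{
		NodeName:         "test-preload-758694",
		AdvertiseAddress: "192.168.39.112",
		APIServerPort:    8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.24.4",
	}
	// Write the rendered config to stdout; in the log it is copied to kubeadm.yaml.new on the node.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}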
	I0731 21:11:15.310115 1134385 ssh_runner.go:195] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I0731 21:11:15.314036 1134385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:11:15.325741 1134385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:11:15.443067 1134385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:11:15.459350 1134385 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694 for IP: 192.168.39.112
	I0731 21:11:15.459382 1134385 certs.go:194] generating shared ca certs ...
	I0731 21:11:15.459406 1134385 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:11:15.459572 1134385 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:11:15.459613 1134385 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:11:15.459621 1134385 certs.go:256] generating profile certs ...
	I0731 21:11:15.459722 1134385 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.key
	I0731 21:11:15.459778 1134385 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/apiserver.key.858825d0
	I0731 21:11:15.459834 1134385 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/proxy-client.key
	I0731 21:11:15.459990 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:11:15.460022 1134385 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:11:15.460029 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:11:15.460054 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:11:15.460078 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:11:15.460114 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:11:15.460160 1134385 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:11:15.460842 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:11:15.502355 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:11:15.541106 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:11:15.577881 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:11:15.611429 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:11:15.640220 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:11:15.672638 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:11:15.697108 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:11:15.721335 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:11:15.745215 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:11:15.768950 1134385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:11:15.793068 1134385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:11:15.810171 1134385 ssh_runner.go:195] Run: openssl version
	I0731 21:11:15.815990 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:11:15.826659 1134385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:11:15.831348 1134385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:11:15.831422 1134385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:11:15.837305 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:11:15.848001 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:11:15.858828 1134385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:11:15.863577 1134385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:11:15.863662 1134385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:11:15.869345 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:11:15.880032 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:11:15.890769 1134385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:11:15.895493 1134385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:11:15.895578 1134385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:11:15.901353 1134385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
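The hash-and-symlink sequence above is how the copied certificates are registered in the node's OpenSSL trust store: `openssl x509 -hash` prints the subject hash, and /etc/ssl/certs/<hash>.0 must point at the PEM file (in the log, b5213941.0 points at minikubeCA.pem). A rough Go equivalent of one iteration, shelling out to openssl the same way the runner does (paths are examples):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks trust-store certificates up via <hash>.0 symlinks.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror `ln -fs`: ignore a missing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}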
	I0731 21:11:15.911973 1134385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:11:15.916780 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:11:15.923070 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:11:15.929329 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:11:15.935533 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:11:15.941400 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:11:15.947267 1134385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
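The `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a failing check is what forces certificate regeneration before the restart. The same check expressed in Go, as a sketch (the path is one of the files from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, needs regeneration")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}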
	I0731 21:11:15.953189 1134385 kubeadm.go:392] StartCluster: {Name:test-preload-758694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-758694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:11:15.953314 1134385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:11:15.953397 1134385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:11:15.989613 1134385 cri.go:89] found id: ""
	I0731 21:11:15.989687 1134385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:11:15.999621 1134385 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:11:15.999646 1134385 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:11:15.999722 1134385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:11:16.009600 1134385 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:11:16.010098 1134385 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-758694" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:11:16.010223 1134385 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-758694" cluster setting kubeconfig missing "test-preload-758694" context setting]
	I0731 21:11:16.010527 1134385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:11:16.011222 1134385 kapi.go:59] client config for test-preload-758694: &rest.Config{Host:"https://192.168.39.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
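The client config logged above is a client-go rest.Config pointing at the profile's client certificate, key, and CA. A trimmed-down sketch of building such a client by hand (same file paths as in the log; not minikube's kapi.go code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.112:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.key",
			CAFile:   "/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same kind of call the restart logic makes later: list kube-system pods over the authenticated client.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}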
	I0731 21:11:16.011925 1134385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:11:16.021357 1134385 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.112
	I0731 21:11:16.021412 1134385 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:11:16.021426 1134385 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:11:16.021486 1134385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:11:16.057773 1134385 cri.go:89] found id: ""
	I0731 21:11:16.057889 1134385 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:11:16.074121 1134385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:11:16.083522 1134385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:11:16.083544 1134385 kubeadm.go:157] found existing configuration files:
	
	I0731 21:11:16.083591 1134385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:11:16.092648 1134385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:11:16.092706 1134385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:11:16.102293 1134385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:11:16.111428 1134385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:11:16.111532 1134385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:11:16.121049 1134385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:11:16.130110 1134385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:11:16.130211 1134385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:11:16.140057 1134385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:11:16.149423 1134385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:11:16.149505 1134385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:11:16.159128 1134385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:11:16.169047 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:16.253751 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:16.980019 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:17.253386 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:17.315442 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:17.409721 1134385 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:11:17.409837 1134385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:11:17.909897 1134385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:11:18.410423 1134385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:11:18.427282 1134385 api_server.go:72] duration metric: took 1.0175575s to wait for apiserver process to appear ...
	I0731 21:11:18.427317 1134385 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:11:18.427343 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:18.427871 1134385 api_server.go:269] stopped: https://192.168.39.112:8443/healthz: Get "https://192.168.39.112:8443/healthz": dial tcp 192.168.39.112:8443: connect: connection refused
	I0731 21:11:18.927718 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:18.928393 1134385 api_server.go:269] stopped: https://192.168.39.112:8443/healthz: Get "https://192.168.39.112:8443/healthz": dial tcp 192.168.39.112:8443: connect: connection refused
	I0731 21:11:19.427943 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:22.618920 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:11:22.618954 1134385 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:11:22.618978 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:22.643600 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:11:22.643656 1134385 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:11:22.927687 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:22.933370 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:11:22.933406 1134385 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:11:23.428410 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:23.434739 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:11:23.434847 1134385 api_server.go:103] status: https://192.168.39.112:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:11:23.928433 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:23.934789 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0731 21:11:23.942292 1134385 api_server.go:141] control plane version: v1.24.4
	I0731 21:11:23.942323 1134385 api_server.go:131] duration metric: took 5.51499932s to wait for apiserver health ...
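The progression above is the usual apiserver startup sequence: connection refused while the static pod is still starting, 403 for the unauthenticated probe before the RBAC bootstrap roles exist, 500 while the post-start hooks finish, and finally 200. A bare-bones healthz poller that treats anything other than HTTP 200 as retry-worthy (a sketch only, not minikube's api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over its serving cert; skip verification for a local probe.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.112:8443/healthz"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver container is starting
			fmt.Println("retrying:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body)) // "ok"
			return
		}
		// 403 (anonymous probe) and 500 (post-start hooks still running) both mean "not yet".
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	panic("apiserver did not become healthy in time")
}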
	I0731 21:11:23.942333 1134385 cni.go:84] Creating CNI manager for ""
	I0731 21:11:23.942340 1134385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:11:23.943976 1134385 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:11:23.945273 1134385 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:11:23.956289 1134385 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:11:23.973135 1134385 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:11:23.985428 1134385 system_pods.go:59] 8 kube-system pods found
	I0731 21:11:23.985484 1134385 system_pods.go:61] "coredns-6d4b75cb6d-4ttsq" [e9110150-4189-4445-b98b-cd02e1d6eca4] Running
	I0731 21:11:23.985498 1134385 system_pods.go:61] "coredns-6d4b75cb6d-mwrq7" [baf0c755-df95-481f-9338-68611f43c185] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:11:23.985506 1134385 system_pods.go:61] "etcd-test-preload-758694" [d73ceeae-e4b6-4c17-abc3-c5ba07d29af2] Running
	I0731 21:11:23.985525 1134385 system_pods.go:61] "kube-apiserver-test-preload-758694" [cd493763-5462-4e2a-91ed-de935322de41] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:11:23.985531 1134385 system_pods.go:61] "kube-controller-manager-test-preload-758694" [bdcbf83a-1ca9-4d59-9526-fecec7e3a030] Running
	I0731 21:11:23.985539 1134385 system_pods.go:61] "kube-proxy-gmnzg" [b683bac0-c8df-47ff-af69-9ec46451ff8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:11:23.985544 1134385 system_pods.go:61] "kube-scheduler-test-preload-758694" [169b2536-ebe3-44ee-a10a-12b1dd278df8] Running
	I0731 21:11:23.985552 1134385 system_pods.go:61] "storage-provisioner" [e2e803a7-f33c-4ce8-94db-802e64802762] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:11:23.985561 1134385 system_pods.go:74] duration metric: took 12.402563ms to wait for pod list to return data ...
	I0731 21:11:23.985575 1134385 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:11:23.990711 1134385 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:11:23.990749 1134385 node_conditions.go:123] node cpu capacity is 2
	I0731 21:11:23.990762 1134385 node_conditions.go:105] duration metric: took 5.180814ms to run NodePressure ...
	I0731 21:11:23.990795 1134385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:11:24.237876 1134385 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:11:24.244918 1134385 kubeadm.go:739] kubelet initialised
	I0731 21:11:24.244948 1134385 kubeadm.go:740] duration metric: took 7.039399ms waiting for restarted kubelet to initialise ...
	I0731 21:11:24.244959 1134385 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:11:24.254395 1134385 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:24.264087 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.264132 1134385 pod_ready.go:81] duration metric: took 9.70836ms for pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:24.264144 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.264154 1134385 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mwrq7" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:24.273068 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "coredns-6d4b75cb6d-mwrq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.273095 1134385 pod_ready.go:81] duration metric: took 8.929471ms for pod "coredns-6d4b75cb6d-mwrq7" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:24.273106 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "coredns-6d4b75cb6d-mwrq7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.273113 1134385 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:24.280240 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "etcd-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.280267 1134385 pod_ready.go:81] duration metric: took 7.14365ms for pod "etcd-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:24.280276 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "etcd-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.280284 1134385 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:24.377232 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "kube-apiserver-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.377263 1134385 pod_ready.go:81] duration metric: took 96.966277ms for pod "kube-apiserver-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:24.377274 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "kube-apiserver-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.377281 1134385 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:24.777929 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.777972 1134385 pod_ready.go:81] duration metric: took 400.680197ms for pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:24.777986 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:24.777994 1134385 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gmnzg" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:25.177839 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "kube-proxy-gmnzg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:25.177871 1134385 pod_ready.go:81] duration metric: took 399.865071ms for pod "kube-proxy-gmnzg" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:25.177881 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "kube-proxy-gmnzg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:25.177889 1134385 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:25.577501 1134385 pod_ready.go:97] node "test-preload-758694" hosting pod "kube-scheduler-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:25.577535 1134385 pod_ready.go:81] duration metric: took 399.638973ms for pod "kube-scheduler-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	E0731 21:11:25.577546 1134385 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-758694" hosting pod "kube-scheduler-test-preload-758694" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:25.577553 1134385 pod_ready.go:38] duration metric: took 1.332583647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
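The pod_ready loop above checks each system-critical pod's Ready condition and, while the node itself still reports Ready=False, logs the pod as skipped rather than failing outright. A condensed version of the underlying per-pod check with client-go (a sketch; the node-status short-circuit and label selection are omitted, and the kubeconfig path/pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19360-1093692/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-4ttsq", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod to be Ready")
}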
	I0731 21:11:25.577572 1134385 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:11:25.588630 1134385 ops.go:34] apiserver oom_adj: -16
	I0731 21:11:25.588654 1134385 kubeadm.go:597] duration metric: took 9.589001611s to restartPrimaryControlPlane
	I0731 21:11:25.588663 1134385 kubeadm.go:394] duration metric: took 9.635486188s to StartCluster
	I0731 21:11:25.588685 1134385 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:11:25.588759 1134385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:11:25.589401 1134385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:11:25.589632 1134385 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:11:25.589673 1134385 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:11:25.589788 1134385 addons.go:69] Setting storage-provisioner=true in profile "test-preload-758694"
	I0731 21:11:25.589809 1134385 addons.go:69] Setting default-storageclass=true in profile "test-preload-758694"
	I0731 21:11:25.589828 1134385 addons.go:234] Setting addon storage-provisioner=true in "test-preload-758694"
	I0731 21:11:25.589837 1134385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-758694"
	W0731 21:11:25.589847 1134385 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:11:25.589827 1134385 config.go:182] Loaded profile config "test-preload-758694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 21:11:25.589900 1134385 host.go:66] Checking if "test-preload-758694" exists ...
	I0731 21:11:25.590157 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:11:25.590189 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:11:25.590314 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:11:25.590348 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:11:25.591994 1134385 out.go:177] * Verifying Kubernetes components...
	I0731 21:11:25.593149 1134385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:11:25.605762 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0731 21:11:25.605927 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0731 21:11:25.606324 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:11:25.606332 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:11:25.606827 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:11:25.606845 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:11:25.606848 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:11:25.606863 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:11:25.607140 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:11:25.607337 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:11:25.607513 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetState
	I0731 21:11:25.607655 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:11:25.607680 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:11:25.609905 1134385 kapi.go:59] client config for test-preload-758694: &rest.Config{Host:"https://192.168.39.112:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.crt", KeyFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/test-preload-758694/client.key", CAFile:"/home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 21:11:25.610186 1134385 addons.go:234] Setting addon default-storageclass=true in "test-preload-758694"
	W0731 21:11:25.610201 1134385 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:11:25.610225 1134385 host.go:66] Checking if "test-preload-758694" exists ...
	I0731 21:11:25.610472 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:11:25.610497 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:11:25.625784 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0731 21:11:25.626272 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:11:25.626809 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:11:25.626835 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:11:25.627148 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:11:25.627584 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0731 21:11:25.627759 1134385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:11:25.627806 1134385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:11:25.628135 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:11:25.628647 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:11:25.628667 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:11:25.628993 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:11:25.629209 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetState
	I0731 21:11:25.630927 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:25.633159 1134385 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:11:25.634689 1134385 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:11:25.634703 1134385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:11:25.634722 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:25.637925 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:25.638411 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:25.638445 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:25.638670 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:25.638841 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:25.638969 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:25.639163 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:25.645820 1134385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0731 21:11:25.646235 1134385 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:11:25.646701 1134385 main.go:141] libmachine: Using API Version  1
	I0731 21:11:25.646751 1134385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:11:25.647089 1134385 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:11:25.647307 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetState
	I0731 21:11:25.648886 1134385 main.go:141] libmachine: (test-preload-758694) Calling .DriverName
	I0731 21:11:25.649117 1134385 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:11:25.649134 1134385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:11:25.649155 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHHostname
	I0731 21:11:25.652075 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:25.652509 1134385 main.go:141] libmachine: (test-preload-758694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:80", ip: ""} in network mk-test-preload-758694: {Iface:virbr1 ExpiryTime:2024-07-31 22:10:53 +0000 UTC Type:0 Mac:52:54:00:5a:16:80 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:test-preload-758694 Clientid:01:52:54:00:5a:16:80}
	I0731 21:11:25.652528 1134385 main.go:141] libmachine: (test-preload-758694) DBG | domain test-preload-758694 has defined IP address 192.168.39.112 and MAC address 52:54:00:5a:16:80 in network mk-test-preload-758694
	I0731 21:11:25.652700 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHPort
	I0731 21:11:25.652939 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHKeyPath
	I0731 21:11:25.653084 1134385 main.go:141] libmachine: (test-preload-758694) Calling .GetSSHUsername
	I0731 21:11:25.653241 1134385 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/test-preload-758694/id_rsa Username:docker}
	I0731 21:11:25.754157 1134385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:11:25.768943 1134385 node_ready.go:35] waiting up to 6m0s for node "test-preload-758694" to be "Ready" ...
	I0731 21:11:25.890478 1134385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:11:25.893598 1134385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:11:26.784809 1134385 main.go:141] libmachine: Making call to close driver server
	I0731 21:11:26.784842 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Close
	I0731 21:11:26.785134 1134385 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:11:26.785155 1134385 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:11:26.785153 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Closing plugin on server side
	I0731 21:11:26.785164 1134385 main.go:141] libmachine: Making call to close driver server
	I0731 21:11:26.785172 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Close
	I0731 21:11:26.785396 1134385 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:11:26.785410 1134385 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:11:26.785436 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Closing plugin on server side
	I0731 21:11:26.792358 1134385 main.go:141] libmachine: Making call to close driver server
	I0731 21:11:26.792381 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Close
	I0731 21:11:26.792697 1134385 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:11:26.792717 1134385 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:11:26.792696 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Closing plugin on server side
	I0731 21:11:26.795311 1134385 main.go:141] libmachine: Making call to close driver server
	I0731 21:11:26.795327 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Close
	I0731 21:11:26.795582 1134385 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:11:26.795602 1134385 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:11:26.795613 1134385 main.go:141] libmachine: Making call to close driver server
	I0731 21:11:26.795623 1134385 main.go:141] libmachine: (test-preload-758694) Calling .Close
	I0731 21:11:26.795626 1134385 main.go:141] libmachine: (test-preload-758694) DBG | Closing plugin on server side
	I0731 21:11:26.795847 1134385 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:11:26.795863 1134385 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:11:26.798435 1134385 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 21:11:26.799505 1134385 addons.go:510] duration metric: took 1.209836807s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0731 21:11:27.772785 1134385 node_ready.go:53] node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:29.773346 1134385 node_ready.go:53] node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:32.273395 1134385 node_ready.go:53] node "test-preload-758694" has status "Ready":"False"
	I0731 21:11:33.273248 1134385 node_ready.go:49] node "test-preload-758694" has status "Ready":"True"
	I0731 21:11:33.273276 1134385 node_ready.go:38] duration metric: took 7.504291927s for node "test-preload-758694" to be "Ready" ...
	I0731 21:11:33.273290 1134385 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:11:33.277877 1134385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.283311 1134385 pod_ready.go:92] pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:33.283336 1134385 pod_ready.go:81] duration metric: took 5.432167ms for pod "coredns-6d4b75cb6d-4ttsq" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.283346 1134385 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.288354 1134385 pod_ready.go:92] pod "etcd-test-preload-758694" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:33.288383 1134385 pod_ready.go:81] duration metric: took 5.029938ms for pod "etcd-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.288395 1134385 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.293415 1134385 pod_ready.go:92] pod "kube-apiserver-test-preload-758694" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:33.293444 1134385 pod_ready.go:81] duration metric: took 5.039444ms for pod "kube-apiserver-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.293454 1134385 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.298578 1134385 pod_ready.go:92] pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:33.298616 1134385 pod_ready.go:81] duration metric: took 5.154848ms for pod "kube-controller-manager-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.298629 1134385 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmnzg" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.674713 1134385 pod_ready.go:92] pod "kube-proxy-gmnzg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:33.674754 1134385 pod_ready.go:81] duration metric: took 376.106752ms for pod "kube-proxy-gmnzg" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:33.674769 1134385 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:35.681025 1134385 pod_ready.go:102] pod "kube-scheduler-test-preload-758694" in "kube-system" namespace has status "Ready":"False"
	I0731 21:11:37.185820 1134385 pod_ready.go:92] pod "kube-scheduler-test-preload-758694" in "kube-system" namespace has status "Ready":"True"
	I0731 21:11:37.185848 1134385 pod_ready.go:81] duration metric: took 3.511070963s for pod "kube-scheduler-test-preload-758694" in "kube-system" namespace to be "Ready" ...
	I0731 21:11:37.185859 1134385 pod_ready.go:38] duration metric: took 3.912558885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:11:37.185877 1134385 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:11:37.185940 1134385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:11:37.201306 1134385 api_server.go:72] duration metric: took 11.61162895s to wait for apiserver process to appear ...
	I0731 21:11:37.201340 1134385 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:11:37.201363 1134385 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0731 21:11:37.206841 1134385 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0731 21:11:37.207971 1134385 api_server.go:141] control plane version: v1.24.4
	I0731 21:11:37.207998 1134385 api_server.go:131] duration metric: took 6.651834ms to wait for apiserver health ...
	I0731 21:11:37.208007 1134385 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:11:37.213318 1134385 system_pods.go:59] 7 kube-system pods found
	I0731 21:11:37.213346 1134385 system_pods.go:61] "coredns-6d4b75cb6d-4ttsq" [e9110150-4189-4445-b98b-cd02e1d6eca4] Running
	I0731 21:11:37.213351 1134385 system_pods.go:61] "etcd-test-preload-758694" [d73ceeae-e4b6-4c17-abc3-c5ba07d29af2] Running
	I0731 21:11:37.213355 1134385 system_pods.go:61] "kube-apiserver-test-preload-758694" [cd493763-5462-4e2a-91ed-de935322de41] Running
	I0731 21:11:37.213359 1134385 system_pods.go:61] "kube-controller-manager-test-preload-758694" [bdcbf83a-1ca9-4d59-9526-fecec7e3a030] Running
	I0731 21:11:37.213362 1134385 system_pods.go:61] "kube-proxy-gmnzg" [b683bac0-c8df-47ff-af69-9ec46451ff8d] Running
	I0731 21:11:37.213365 1134385 system_pods.go:61] "kube-scheduler-test-preload-758694" [169b2536-ebe3-44ee-a10a-12b1dd278df8] Running
	I0731 21:11:37.213371 1134385 system_pods.go:61] "storage-provisioner" [e2e803a7-f33c-4ce8-94db-802e64802762] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:11:37.213384 1134385 system_pods.go:74] duration metric: took 5.37221ms to wait for pod list to return data ...
	I0731 21:11:37.213395 1134385 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:11:37.273286 1134385 default_sa.go:45] found service account: "default"
	I0731 21:11:37.273316 1134385 default_sa.go:55] duration metric: took 59.913593ms for default service account to be created ...
	I0731 21:11:37.273338 1134385 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:11:37.476399 1134385 system_pods.go:86] 7 kube-system pods found
	I0731 21:11:37.476430 1134385 system_pods.go:89] "coredns-6d4b75cb6d-4ttsq" [e9110150-4189-4445-b98b-cd02e1d6eca4] Running
	I0731 21:11:37.476436 1134385 system_pods.go:89] "etcd-test-preload-758694" [d73ceeae-e4b6-4c17-abc3-c5ba07d29af2] Running
	I0731 21:11:37.476440 1134385 system_pods.go:89] "kube-apiserver-test-preload-758694" [cd493763-5462-4e2a-91ed-de935322de41] Running
	I0731 21:11:37.476445 1134385 system_pods.go:89] "kube-controller-manager-test-preload-758694" [bdcbf83a-1ca9-4d59-9526-fecec7e3a030] Running
	I0731 21:11:37.476449 1134385 system_pods.go:89] "kube-proxy-gmnzg" [b683bac0-c8df-47ff-af69-9ec46451ff8d] Running
	I0731 21:11:37.476453 1134385 system_pods.go:89] "kube-scheduler-test-preload-758694" [169b2536-ebe3-44ee-a10a-12b1dd278df8] Running
	I0731 21:11:37.476459 1134385 system_pods.go:89] "storage-provisioner" [e2e803a7-f33c-4ce8-94db-802e64802762] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:11:37.476466 1134385 system_pods.go:126] duration metric: took 203.121748ms to wait for k8s-apps to be running ...
	I0731 21:11:37.476480 1134385 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:11:37.476539 1134385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:11:37.497092 1134385 system_svc.go:56] duration metric: took 20.599179ms WaitForService to wait for kubelet
	I0731 21:11:37.497132 1134385 kubeadm.go:582] duration metric: took 11.907462243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:11:37.497159 1134385 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:11:37.673950 1134385 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:11:37.673980 1134385 node_conditions.go:123] node cpu capacity is 2
	I0731 21:11:37.673993 1134385 node_conditions.go:105] duration metric: took 176.827838ms to run NodePressure ...
	I0731 21:11:37.674007 1134385 start.go:241] waiting for startup goroutines ...
	I0731 21:11:37.674016 1134385 start.go:246] waiting for cluster config update ...
	I0731 21:11:37.674030 1134385 start.go:255] writing updated cluster config ...
	I0731 21:11:37.674330 1134385 ssh_runner.go:195] Run: rm -f paused
	I0731 21:11:37.723760 1134385 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0731 21:11:37.725757 1134385 out.go:177] 
	W0731 21:11:37.727002 1134385 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0731 21:11:37.728379 1134385 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0731 21:11:37.729653 1134385 out.go:177] * Done! kubectl is now configured to use "test-preload-758694" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.631598311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460298631573386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fb8eb14-3a4d-431f-8969-9337fae5ed20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.632141735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37303f08-a4f1-4247-b919-6ea3a6bfe726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.632195834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37303f08-a4f1-4247-b919-6ea3a6bfe726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.632422895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7722b4b65198bb371ac49598e0848c9ff6f8a792ceeea5083ab255cc7fbe7552,PodSandboxId:30525fd5497131c81a2f7e126babd6128fa6f8c836159bc6e9d97a7a87a0059e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722460291581965504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4ttsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9110150-4189-4445-b98b-cd02e1d6eca4,},Annotations:map[string]string{io.kubernetes.container.hash: 262f2824,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600,PodSandboxId:1ba6cf2b7c5337e8574c5421cd4791e17d8e742249a788d446f98d78e01fd6f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722460284539749157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e2e803a7-f33c-4ce8-94db-802e64802762,},Annotations:map[string]string{io.kubernetes.container.hash: d168be6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a2907d945aa2171b94afbda09d90f71516e94d23466770e563b8362682dcad,PodSandboxId:aad0d05704ea73b28286e933e5f47be85be886fedd81cf90079c567f4697d158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722460284351790608,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmnzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b683
bac0-c8df-47ff-af69-9ec46451ff8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f68c910,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fc3a6fb9f463b857a9b3a353f166ea690846271dd0f1120b242c124dc83502,PodSandboxId:2506b5500a942fd883c9d1f48d71be6956786611faa416617ae457daa11126c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722460278097155479,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dad672daa5f3be5fccc3ba07a9c4560,},Annota
tions:map[string]string{io.kubernetes.container.hash: 3ec68edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:940ad13d868aa028ad59cebc224f4c6a6a3222d227a2ff1678169dee059cded5,PodSandboxId:b1cd1c6c53dfcc2a47d916c5f034ddb9f9653b9e2f1753687763c428e7053c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722460278139693769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa6f2ead807fd8989c520237bc4a8945,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9799d8ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2fc78bf0dd3395e0a56a2537b2c55634054f4579f1ca5c69fbaf3ec3f6a63c,PodSandboxId:2e964567dfd9769185445766ff4e569d2faab42ec4286657b627d4592c04bdd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722460278079140806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957a986e645ccac6e707bb7ae314b349,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d8f1f37aff911317aea93930c83ae869ed2340345b363cbbe6c46a87649c87,PodSandboxId:5bc1c9920a6800771156d54099f8a3031cf9035ccf5a84dac5ffae0818b858b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722460278076923956,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f201a8c7b3d645418029f6358a6564,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37303f08-a4f1-4247-b919-6ea3a6bfe726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.668085240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=012920c4-a541-48a8-8cc9-6dbb79e096a0 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.668160495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=012920c4-a541-48a8-8cc9-6dbb79e096a0 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.669169324Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36541c79-07d6-4466-97ab-16d6c69481ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.669812242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460298669783741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36541c79-07d6-4466-97ab-16d6c69481ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.670857637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f01f97cd-0ca6-496e-829b-7b9c5c5f9737 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.670935796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f01f97cd-0ca6-496e-829b-7b9c5c5f9737 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.672195305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7722b4b65198bb371ac49598e0848c9ff6f8a792ceeea5083ab255cc7fbe7552,PodSandboxId:30525fd5497131c81a2f7e126babd6128fa6f8c836159bc6e9d97a7a87a0059e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722460291581965504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4ttsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9110150-4189-4445-b98b-cd02e1d6eca4,},Annotations:map[string]string{io.kubernetes.container.hash: 262f2824,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600,PodSandboxId:1ba6cf2b7c5337e8574c5421cd4791e17d8e742249a788d446f98d78e01fd6f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722460284539749157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e2e803a7-f33c-4ce8-94db-802e64802762,},Annotations:map[string]string{io.kubernetes.container.hash: d168be6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a2907d945aa2171b94afbda09d90f71516e94d23466770e563b8362682dcad,PodSandboxId:aad0d05704ea73b28286e933e5f47be85be886fedd81cf90079c567f4697d158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722460284351790608,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmnzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b683
bac0-c8df-47ff-af69-9ec46451ff8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f68c910,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fc3a6fb9f463b857a9b3a353f166ea690846271dd0f1120b242c124dc83502,PodSandboxId:2506b5500a942fd883c9d1f48d71be6956786611faa416617ae457daa11126c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722460278097155479,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dad672daa5f3be5fccc3ba07a9c4560,},Annota
tions:map[string]string{io.kubernetes.container.hash: 3ec68edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:940ad13d868aa028ad59cebc224f4c6a6a3222d227a2ff1678169dee059cded5,PodSandboxId:b1cd1c6c53dfcc2a47d916c5f034ddb9f9653b9e2f1753687763c428e7053c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722460278139693769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa6f2ead807fd8989c520237bc4a8945,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9799d8ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2fc78bf0dd3395e0a56a2537b2c55634054f4579f1ca5c69fbaf3ec3f6a63c,PodSandboxId:2e964567dfd9769185445766ff4e569d2faab42ec4286657b627d4592c04bdd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722460278079140806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957a986e645ccac6e707bb7ae314b349,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d8f1f37aff911317aea93930c83ae869ed2340345b363cbbe6c46a87649c87,PodSandboxId:5bc1c9920a6800771156d54099f8a3031cf9035ccf5a84dac5ffae0818b858b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722460278076923956,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f201a8c7b3d645418029f6358a6564,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f01f97cd-0ca6-496e-829b-7b9c5c5f9737 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.708217117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=610d3059-6ffd-4615-b301-63050db80489 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.708344265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=610d3059-6ffd-4615-b301-63050db80489 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.709762236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bc56973-0e38-450e-be2e-8ffd5a263674 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.710210692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460298710186947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bc56973-0e38-450e-be2e-8ffd5a263674 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.710741348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39f85dfd-fc71-474b-96fe-bf2b2369bd51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.710793266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39f85dfd-fc71-474b-96fe-bf2b2369bd51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.710979339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7722b4b65198bb371ac49598e0848c9ff6f8a792ceeea5083ab255cc7fbe7552,PodSandboxId:30525fd5497131c81a2f7e126babd6128fa6f8c836159bc6e9d97a7a87a0059e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722460291581965504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4ttsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9110150-4189-4445-b98b-cd02e1d6eca4,},Annotations:map[string]string{io.kubernetes.container.hash: 262f2824,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600,PodSandboxId:1ba6cf2b7c5337e8574c5421cd4791e17d8e742249a788d446f98d78e01fd6f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722460284539749157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e2e803a7-f33c-4ce8-94db-802e64802762,},Annotations:map[string]string{io.kubernetes.container.hash: d168be6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a2907d945aa2171b94afbda09d90f71516e94d23466770e563b8362682dcad,PodSandboxId:aad0d05704ea73b28286e933e5f47be85be886fedd81cf90079c567f4697d158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722460284351790608,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmnzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b683
bac0-c8df-47ff-af69-9ec46451ff8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f68c910,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fc3a6fb9f463b857a9b3a353f166ea690846271dd0f1120b242c124dc83502,PodSandboxId:2506b5500a942fd883c9d1f48d71be6956786611faa416617ae457daa11126c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722460278097155479,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dad672daa5f3be5fccc3ba07a9c4560,},Annota
tions:map[string]string{io.kubernetes.container.hash: 3ec68edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:940ad13d868aa028ad59cebc224f4c6a6a3222d227a2ff1678169dee059cded5,PodSandboxId:b1cd1c6c53dfcc2a47d916c5f034ddb9f9653b9e2f1753687763c428e7053c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722460278139693769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa6f2ead807fd8989c520237bc4a8945,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9799d8ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2fc78bf0dd3395e0a56a2537b2c55634054f4579f1ca5c69fbaf3ec3f6a63c,PodSandboxId:2e964567dfd9769185445766ff4e569d2faab42ec4286657b627d4592c04bdd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722460278079140806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957a986e645ccac6e707bb7ae314b349,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d8f1f37aff911317aea93930c83ae869ed2340345b363cbbe6c46a87649c87,PodSandboxId:5bc1c9920a6800771156d54099f8a3031cf9035ccf5a84dac5ffae0818b858b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722460278076923956,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f201a8c7b3d645418029f6358a6564,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39f85dfd-fc71-474b-96fe-bf2b2369bd51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.745807225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fceee987-9097-4ea9-a280-35a8dac947d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.745902656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fceee987-9097-4ea9-a280-35a8dac947d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.747492048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebb5be77-89e7-4c47-a51b-c95bc2321503 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.747940044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460298747899571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebb5be77-89e7-4c47-a51b-c95bc2321503 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.748614352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaa74cdb-1463-45cc-a941-13558d2883a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.748746282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaa74cdb-1463-45cc-a941-13558d2883a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:11:38 test-preload-758694 crio[686]: time="2024-07-31 21:11:38.748938791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7722b4b65198bb371ac49598e0848c9ff6f8a792ceeea5083ab255cc7fbe7552,PodSandboxId:30525fd5497131c81a2f7e126babd6128fa6f8c836159bc6e9d97a7a87a0059e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722460291581965504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-4ttsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9110150-4189-4445-b98b-cd02e1d6eca4,},Annotations:map[string]string{io.kubernetes.container.hash: 262f2824,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600,PodSandboxId:1ba6cf2b7c5337e8574c5421cd4791e17d8e742249a788d446f98d78e01fd6f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722460284539749157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e2e803a7-f33c-4ce8-94db-802e64802762,},Annotations:map[string]string{io.kubernetes.container.hash: d168be6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a2907d945aa2171b94afbda09d90f71516e94d23466770e563b8362682dcad,PodSandboxId:aad0d05704ea73b28286e933e5f47be85be886fedd81cf90079c567f4697d158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722460284351790608,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmnzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b683
bac0-c8df-47ff-af69-9ec46451ff8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7f68c910,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8fc3a6fb9f463b857a9b3a353f166ea690846271dd0f1120b242c124dc83502,PodSandboxId:2506b5500a942fd883c9d1f48d71be6956786611faa416617ae457daa11126c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722460278097155479,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dad672daa5f3be5fccc3ba07a9c4560,},Annota
tions:map[string]string{io.kubernetes.container.hash: 3ec68edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:940ad13d868aa028ad59cebc224f4c6a6a3222d227a2ff1678169dee059cded5,PodSandboxId:b1cd1c6c53dfcc2a47d916c5f034ddb9f9653b9e2f1753687763c428e7053c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722460278139693769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa6f2ead807fd8989c520237bc4a8945,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9799d8ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2fc78bf0dd3395e0a56a2537b2c55634054f4579f1ca5c69fbaf3ec3f6a63c,PodSandboxId:2e964567dfd9769185445766ff4e569d2faab42ec4286657b627d4592c04bdd3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722460278079140806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957a986e645ccac6e707bb7ae314b349,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d8f1f37aff911317aea93930c83ae869ed2340345b363cbbe6c46a87649c87,PodSandboxId:5bc1c9920a6800771156d54099f8a3031cf9035ccf5a84dac5ffae0818b858b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722460278076923956,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-758694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f201a8c7b3d645418029f6358a6564,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaa74cdb-1463-45cc-a941-13558d2883a8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7722b4b65198b       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   30525fd549713       coredns-6d4b75cb6d-4ttsq
	042ed400e997c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   1ba6cf2b7c533       storage-provisioner
	d0a2907d945aa       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   aad0d05704ea7       kube-proxy-gmnzg
	940ad13d868aa       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   b1cd1c6c53dfc       kube-apiserver-test-preload-758694
	a8fc3a6fb9f46       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   2506b5500a942       etcd-test-preload-758694
	df2fc78bf0dd3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   2e964567dfd97       kube-scheduler-test-preload-758694
	84d8f1f37aff9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   5bc1c9920a680       kube-controller-manager-test-preload-758694
	
	
	==> coredns [7722b4b65198bb371ac49598e0848c9ff6f8a792ceeea5083ab255cc7fbe7552] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:41869 - 8590 "HINFO IN 8363449627003755590.7939627028226254048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020894996s
	
	
	==> describe nodes <==
	Name:               test-preload-758694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-758694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=test-preload-758694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_10_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-758694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:11:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:11:32 +0000   Wed, 31 Jul 2024 21:10:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:11:32 +0000   Wed, 31 Jul 2024 21:10:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:11:32 +0000   Wed, 31 Jul 2024 21:10:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:11:32 +0000   Wed, 31 Jul 2024 21:11:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    test-preload-758694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02853a72fdf44cf5a99acce125e81e3a
	  System UUID:                02853a72-fdf4-4cf5-a99a-cce125e81e3a
	  Boot ID:                    c7b4e65f-7b60-4c4e-bef9-23e285ec2859
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4ttsq                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-test-preload-758694                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         88s
	  kube-system                 kube-apiserver-test-preload-758694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-758694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gmnzg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-test-preload-758694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (8%)   170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node test-preload-758694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node test-preload-758694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node test-preload-758694 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s                kubelet          Node test-preload-758694 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node test-preload-758694 event: Registered Node test-preload-758694 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-758694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-758694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-758694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-758694 event: Registered Node test-preload-758694 in Controller
	
	
	==> dmesg <==
	[Jul31 21:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047458] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035905] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.707490] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.961179] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 21:11] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.059008] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064061] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.190839] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.110032] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.264670] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[ +13.028610] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.055409] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.734416] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +6.875467] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.609617] systemd-fstab-generator[1775]: Ignoring "noauto" option for root device
	[  +5.745867] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [a8fc3a6fb9f463b857a9b3a353f166ea690846271dd0f1120b242c124dc83502] <==
	{"level":"info","ts":"2024-07-31T21:11:18.397Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2f9167931180af7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T21:11:18.397Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T21:11:18.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 switched to configuration voters=(12896363717722639095)"}
	{"level":"info","ts":"2024-07-31T21:11:18.399Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"694778b4375dcf94","local-member-id":"b2f9167931180af7","added-peer-id":"b2f9167931180af7","added-peer-peer-urls":["https://192.168.39.112:2380"]}
	{"level":"info","ts":"2024-07-31T21:11:18.399Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"694778b4375dcf94","local-member-id":"b2f9167931180af7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:11:18.399Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:11:18.405Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:11:18.405Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-07-31T21:11:18.405Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.112:2380"}
	{"level":"info","ts":"2024-07-31T21:11:18.406Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:11:18.406Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2f9167931180af7","initial-advertise-peer-urls":["https://192.168.39.112:2380"],"listen-peer-urls":["https://192.168.39.112:2380"],"advertise-client-urls":["https://192.168.39.112:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.112:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgPreVoteResp from b2f9167931180af7 at term 2"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 received MsgVoteResp from b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2f9167931180af7 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:11:20.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2f9167931180af7 elected leader b2f9167931180af7 at term 3"}
	{"level":"info","ts":"2024-07-31T21:11:20.281Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2f9167931180af7","local-member-attributes":"{Name:test-preload-758694 ClientURLs:[https://192.168.39.112:2379]}","request-path":"/0/members/b2f9167931180af7/attributes","cluster-id":"694778b4375dcf94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:11:20.281Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:11:20.283Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:11:20.283Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:11:20.285Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.112:2379"}
	{"level":"info","ts":"2024-07-31T21:11:20.295Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:11:20.295Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:11:39 up 0 min,  0 users,  load average: 0.43, 0.12, 0.04
	Linux test-preload-758694 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [940ad13d868aa028ad59cebc224f4c6a6a3222d227a2ff1678169dee059cded5] <==
	I0731 21:11:22.585251       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 21:11:22.585282       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 21:11:22.585342       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0731 21:11:22.585356       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0731 21:11:22.602780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 21:11:22.613213       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 21:11:22.671975       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 21:11:22.685568       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 21:11:22.686045       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0731 21:11:22.698638       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 21:11:22.758968       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:11:22.766377       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:11:22.766547       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:11:22.766832       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 21:11:22.768396       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:11:23.260617       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 21:11:23.566865       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 21:11:24.119798       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 21:11:24.137285       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 21:11:24.181763       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 21:11:24.206811       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:11:24.219606       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 21:11:24.621178       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 21:11:35.015827       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 21:11:35.120944       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [84d8f1f37aff911317aea93930c83ae869ed2340345b363cbbe6c46a87649c87] <==
	I0731 21:11:35.029180       1 shared_informer.go:262] Caches are synced for node
	I0731 21:11:35.029217       1 range_allocator.go:173] Starting range CIDR allocator
	I0731 21:11:35.029221       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0731 21:11:35.029230       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0731 21:11:35.036419       1 shared_informer.go:262] Caches are synced for crt configmap
	I0731 21:11:35.037267       1 shared_informer.go:262] Caches are synced for job
	I0731 21:11:35.041808       1 shared_informer.go:262] Caches are synced for disruption
	I0731 21:11:35.041837       1 disruption.go:371] Sending events to api server.
	I0731 21:11:35.045953       1 shared_informer.go:262] Caches are synced for HPA
	I0731 21:11:35.047174       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 21:11:35.049457       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0731 21:11:35.050734       1 shared_informer.go:262] Caches are synced for ephemeral
	I0731 21:11:35.131178       1 shared_informer.go:262] Caches are synced for taint
	I0731 21:11:35.131404       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0731 21:11:35.131541       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-758694. Assuming now as a timestamp.
	I0731 21:11:35.131593       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 21:11:35.131900       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 21:11:35.132441       1 event.go:294] "Event occurred" object="test-preload-758694" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-758694 event: Registered Node test-preload-758694 in Controller"
	I0731 21:11:35.136250       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 21:11:35.192026       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 21:11:35.211480       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 21:11:35.221049       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 21:11:35.666361       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 21:11:35.704785       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 21:11:35.704864       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d0a2907d945aa2171b94afbda09d90f71516e94d23466770e563b8362682dcad] <==
	I0731 21:11:24.552730       1 node.go:163] Successfully retrieved node IP: 192.168.39.112
	I0731 21:11:24.552803       1 server_others.go:138] "Detected node IP" address="192.168.39.112"
	I0731 21:11:24.552835       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 21:11:24.609755       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 21:11:24.609858       1 server_others.go:206] "Using iptables Proxier"
	I0731 21:11:24.610438       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 21:11:24.611283       1 server.go:661] "Version info" version="v1.24.4"
	I0731 21:11:24.611568       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:11:24.613349       1 config.go:317] "Starting service config controller"
	I0731 21:11:24.613434       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 21:11:24.613520       1 config.go:226] "Starting endpoint slice config controller"
	I0731 21:11:24.613542       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 21:11:24.615219       1 config.go:444] "Starting node config controller"
	I0731 21:11:24.615256       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 21:11:24.713828       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 21:11:24.713908       1 shared_informer.go:262] Caches are synced for service config
	I0731 21:11:24.715794       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [df2fc78bf0dd3395e0a56a2537b2c55634054f4579f1ca5c69fbaf3ec3f6a63c] <==
	I0731 21:11:19.317586       1 serving.go:348] Generated self-signed cert in-memory
	W0731 21:11:22.617367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:11:22.617452       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:11:22.617469       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:11:22.617478       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:11:22.678050       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0731 21:11:22.678086       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:11:22.689071       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 21:11:22.689237       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:11:22.689286       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:11:22.689380       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:11:22.790379       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439290    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b683bac0-c8df-47ff-af69-9ec46451ff8d-xtables-lock\") pod \"kube-proxy-gmnzg\" (UID: \"b683bac0-c8df-47ff-af69-9ec46451ff8d\") " pod="kube-system/kube-proxy-gmnzg"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439395    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume\") pod \"coredns-6d4b75cb6d-4ttsq\" (UID: \"e9110150-4189-4445-b98b-cd02e1d6eca4\") " pod="kube-system/coredns-6d4b75cb6d-4ttsq"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439440    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b683bac0-c8df-47ff-af69-9ec46451ff8d-kube-proxy\") pod \"kube-proxy-gmnzg\" (UID: \"b683bac0-c8df-47ff-af69-9ec46451ff8d\") " pod="kube-system/kube-proxy-gmnzg"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439462    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj26\" (UniqueName: \"kubernetes.io/projected/e2e803a7-f33c-4ce8-94db-802e64802762-kube-api-access-4gj26\") pod \"storage-provisioner\" (UID: \"e2e803a7-f33c-4ce8-94db-802e64802762\") " pod="kube-system/storage-provisioner"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439481    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wgn\" (UniqueName: \"kubernetes.io/projected/b683bac0-c8df-47ff-af69-9ec46451ff8d-kube-api-access-l4wgn\") pod \"kube-proxy-gmnzg\" (UID: \"b683bac0-c8df-47ff-af69-9ec46451ff8d\") " pod="kube-system/kube-proxy-gmnzg"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439498    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b683bac0-c8df-47ff-af69-9ec46451ff8d-lib-modules\") pod \"kube-proxy-gmnzg\" (UID: \"b683bac0-c8df-47ff-af69-9ec46451ff8d\") " pod="kube-system/kube-proxy-gmnzg"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439517    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rxw\" (UniqueName: \"kubernetes.io/projected/e9110150-4189-4445-b98b-cd02e1d6eca4-kube-api-access-m2rxw\") pod \"coredns-6d4b75cb6d-4ttsq\" (UID: \"e9110150-4189-4445-b98b-cd02e1d6eca4\") " pod="kube-system/coredns-6d4b75cb6d-4ttsq"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439539    1083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e2e803a7-f33c-4ce8-94db-802e64802762-tmp\") pod \"storage-provisioner\" (UID: \"e2e803a7-f33c-4ce8-94db-802e64802762\") " pod="kube-system/storage-provisioner"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: I0731 21:11:23.439555    1083 reconciler.go:159] "Reconciler: start to sync state"
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: E0731 21:11:23.544116    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 21:11:23 test-preload-758694 kubelet[1083]: E0731 21:11:23.544585    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume podName:e9110150-4189-4445-b98b-cd02e1d6eca4 nodeName:}" failed. No retries permitted until 2024-07-31 21:11:24.044557053 +0000 UTC m=+6.801927355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume") pod "coredns-6d4b75cb6d-4ttsq" (UID: "e9110150-4189-4445-b98b-cd02e1d6eca4") : object "kube-system"/"coredns" not registered
	Jul 31 21:11:24 test-preload-758694 kubelet[1083]: E0731 21:11:24.046877    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 21:11:24 test-preload-758694 kubelet[1083]: E0731 21:11:24.046935    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume podName:e9110150-4189-4445-b98b-cd02e1d6eca4 nodeName:}" failed. No retries permitted until 2024-07-31 21:11:25.046921278 +0000 UTC m=+7.804291575 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume") pod "coredns-6d4b75cb6d-4ttsq" (UID: "e9110150-4189-4445-b98b-cd02e1d6eca4") : object "kube-system"/"coredns" not registered
	Jul 31 21:11:24 test-preload-758694 kubelet[1083]: I0731 21:11:24.512614    1083 scope.go:110] "RemoveContainer" containerID="d8d04100248a300f28bef4bb6aa96d08188a4b2b07aa019595071351fcf1e775"
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: E0731 21:11:25.053330    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: E0731 21:11:25.053453    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume podName:e9110150-4189-4445-b98b-cd02e1d6eca4 nodeName:}" failed. No retries permitted until 2024-07-31 21:11:27.053436556 +0000 UTC m=+9.810806843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume") pod "coredns-6d4b75cb6d-4ttsq" (UID: "e9110150-4189-4445-b98b-cd02e1d6eca4") : object "kube-system"/"coredns" not registered
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: E0731 21:11:25.472246    1083 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-4ttsq" podUID=e9110150-4189-4445-b98b-cd02e1d6eca4
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: I0731 21:11:25.477570    1083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=baf0c755-df95-481f-9338-68611f43c185 path="/var/lib/kubelet/pods/baf0c755-df95-481f-9338-68611f43c185/volumes"
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: I0731 21:11:25.523745    1083 scope.go:110] "RemoveContainer" containerID="d8d04100248a300f28bef4bb6aa96d08188a4b2b07aa019595071351fcf1e775"
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: I0731 21:11:25.524451    1083 scope.go:110] "RemoveContainer" containerID="042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600"
	Jul 31 21:11:25 test-preload-758694 kubelet[1083]: E0731 21:11:25.524726    1083 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e2e803a7-f33c-4ce8-94db-802e64802762)\"" pod="kube-system/storage-provisioner" podUID=e2e803a7-f33c-4ce8-94db-802e64802762
	Jul 31 21:11:26 test-preload-758694 kubelet[1083]: I0731 21:11:26.527851    1083 scope.go:110] "RemoveContainer" containerID="042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600"
	Jul 31 21:11:26 test-preload-758694 kubelet[1083]: E0731 21:11:26.528338    1083 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e2e803a7-f33c-4ce8-94db-802e64802762)\"" pod="kube-system/storage-provisioner" podUID=e2e803a7-f33c-4ce8-94db-802e64802762
	Jul 31 21:11:27 test-preload-758694 kubelet[1083]: E0731 21:11:27.067949    1083 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 21:11:27 test-preload-758694 kubelet[1083]: E0731 21:11:27.068043    1083 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume podName:e9110150-4189-4445-b98b-cd02e1d6eca4 nodeName:}" failed. No retries permitted until 2024-07-31 21:11:31.068027071 +0000 UTC m=+13.825397357 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e9110150-4189-4445-b98b-cd02e1d6eca4-config-volume") pod "coredns-6d4b75cb6d-4ttsq" (UID: "e9110150-4189-4445-b98b-cd02e1d6eca4") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [042ed400e997c514677f25b7a3f860cf036e7c6189b74e050a01beccfe7cd600] <==
	I0731 21:11:24.655185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:11:24.656827       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-758694 -n test-preload-758694
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-758694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-758694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-758694
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-758694: (1.157579617s)
--- FAIL: TestPreload (180.96s)
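A minimal diagnostic sketch for the failure captured above: the post-mortem shows storage-provisioner crash-looping on "dial tcp 10.96.0.1:443: connect: connection refused" and kubelet repeatedly unable to mount the coredns config-volume ("object \"kube-system\"/\"coredns\" not registered") while the restarted apiserver caught up. Assuming the test-preload-758694 profile were still available (the cleanup step above deletes it), commands like the following could be used to inspect those symptoms; the --file name is illustrative only.

	# list kube-system pods and their restart counts on the preload profile
	kubectl --context test-preload-758694 -n kube-system get pods -o wide
	# fetch logs from the previous (crashed) storage-provisioner container
	kubectl --context test-preload-758694 -n kube-system logs --previous storage-provisioner
	# recent events, ordered by time, to see the mount/backoff sequence
	kubectl --context test-preload-758694 -n kube-system get events --sort-by=.lastTimestamp
	# collect the full minikube log bundle for the profile
	out/minikube-linux-amd64 -p test-preload-758694 logs --file=preload-postmortem.txt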

                                                
                                    
x
+
TestKubernetesUpgrade (359.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m52.069769607s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-202332" primary control-plane node in "kubernetes-upgrade-202332" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:17:34.939754 1141656 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:17:34.939864 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.939869 1141656 out.go:304] Setting ErrFile to fd 2...
	I0731 21:17:34.939873 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.940087 1141656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:17:34.940794 1141656 out.go:298] Setting JSON to false
	I0731 21:17:34.941970 1141656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18006,"bootTime":1722442649,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:17:34.942043 1141656 start.go:139] virtualization: kvm guest
	I0731 21:17:34.944143 1141656 out.go:177] * [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:17:34.945463 1141656 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:17:34.945489 1141656 notify.go:220] Checking for updates...
	I0731 21:17:34.948015 1141656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:17:34.949284 1141656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:17:34.950603 1141656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:17:34.951789 1141656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:17:34.953035 1141656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:17:34.954861 1141656 config.go:182] Loaded profile config "cert-expiration-238338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.954981 1141656 config.go:182] Loaded profile config "cert-options-425308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955119 1141656 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955241 1141656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:17:34.996580 1141656 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:17:34.998984 1141656 start.go:297] selected driver: kvm2
	I0731 21:17:34.999025 1141656 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:17:34.999057 1141656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:17:35.000305 1141656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.000414 1141656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:17:35.018244 1141656 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:17:35.018319 1141656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:17:35.018619 1141656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:17:35.018683 1141656 cni.go:84] Creating CNI manager for ""
	I0731 21:17:35.018699 1141656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:17:35.018708 1141656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:17:35.018791 1141656 start.go:340] cluster config:
	{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:17:35.018922 1141656 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.020671 1141656 out.go:177] * Starting "kubernetes-upgrade-202332" primary control-plane node in "kubernetes-upgrade-202332" cluster
	I0731 21:17:35.021864 1141656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:17:35.021932 1141656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:17:35.021955 1141656 cache.go:56] Caching tarball of preloaded images
	I0731 21:17:35.022100 1141656 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:17:35.022121 1141656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 21:17:35.022260 1141656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/config.json ...
	I0731 21:17:35.022294 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/config.json: {Name:mkcf822990ecc74db026ff06d02158fd1e1e3d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:17:35.022474 1141656 start.go:360] acquireMachinesLock for kubernetes-upgrade-202332: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:17:57.105226 1141656 start.go:364] duration metric: took 22.082719407s to acquireMachinesLock for "kubernetes-upgrade-202332"
	I0731 21:17:57.105309 1141656 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:17:57.105436 1141656 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:17:57.107432 1141656 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 21:17:57.107685 1141656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:17:57.107756 1141656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:17:57.125545 1141656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0731 21:17:57.126121 1141656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:17:57.126793 1141656 main.go:141] libmachine: Using API Version  1
	I0731 21:17:57.126813 1141656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:17:57.127189 1141656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:17:57.127379 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:17:57.127526 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:17:57.127699 1141656 start.go:159] libmachine.API.Create for "kubernetes-upgrade-202332" (driver="kvm2")
	I0731 21:17:57.127726 1141656 client.go:168] LocalClient.Create starting
	I0731 21:17:57.127766 1141656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 21:17:57.127816 1141656 main.go:141] libmachine: Decoding PEM data...
	I0731 21:17:57.127842 1141656 main.go:141] libmachine: Parsing certificate...
	I0731 21:17:57.127910 1141656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 21:17:57.127937 1141656 main.go:141] libmachine: Decoding PEM data...
	I0731 21:17:57.127961 1141656 main.go:141] libmachine: Parsing certificate...
	I0731 21:17:57.127984 1141656 main.go:141] libmachine: Running pre-create checks...
	I0731 21:17:57.128022 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .PreCreateCheck
	I0731 21:17:57.128415 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetConfigRaw
	I0731 21:17:57.128914 1141656 main.go:141] libmachine: Creating machine...
	I0731 21:17:57.128936 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .Create
	I0731 21:17:57.129097 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Creating KVM machine...
	I0731 21:17:57.130454 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found existing default KVM network
	I0731 21:17:57.132394 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:57.132185 1142567 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211720}
	I0731 21:17:57.132433 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | created network xml: 
	I0731 21:17:57.132448 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | <network>
	I0731 21:17:57.132463 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   <name>mk-kubernetes-upgrade-202332</name>
	I0731 21:17:57.132475 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   <dns enable='no'/>
	I0731 21:17:57.132500 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   
	I0731 21:17:57.132513 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 21:17:57.132524 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |     <dhcp>
	I0731 21:17:57.132533 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 21:17:57.132544 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |     </dhcp>
	I0731 21:17:57.132556 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   </ip>
	I0731 21:17:57.132566 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG |   
	I0731 21:17:57.132601 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | </network>
	I0731 21:17:57.132631 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | 
	I0731 21:17:57.138453 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | trying to create private KVM network mk-kubernetes-upgrade-202332 192.168.39.0/24...
	I0731 21:17:57.219570 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | private KVM network mk-kubernetes-upgrade-202332 192.168.39.0/24 created
	I0731 21:17:57.219609 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332 ...
	I0731 21:17:57.219635 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:17:57.219651 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:57.219528 1142567 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:17:57.219674 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:17:57.512709 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:57.512548 1142567 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa...
	I0731 21:17:57.591326 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:57.591186 1142567 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/kubernetes-upgrade-202332.rawdisk...
	I0731 21:17:57.591359 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Writing magic tar header
	I0731 21:17:57.591371 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Writing SSH key tar header
	I0731 21:17:57.591379 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:57.591304 1142567 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332 ...
	I0731 21:17:57.591429 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332
	I0731 21:17:57.591510 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332 (perms=drwx------)
	I0731 21:17:57.591556 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 21:17:57.591576 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:17:57.591592 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 21:17:57.591603 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 21:17:57.591614 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:17:57.591627 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:17:57.591646 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:17:57.591659 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Creating domain...
	I0731 21:17:57.591672 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 21:17:57.591686 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:17:57.591701 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:17:57.591711 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Checking permissions on dir: /home
	I0731 21:17:57.591722 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Skipping /home - not owner
	I0731 21:17:57.592884 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) define libvirt domain using xml: 
	I0731 21:17:57.592914 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) <domain type='kvm'>
	I0731 21:17:57.592926 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <name>kubernetes-upgrade-202332</name>
	I0731 21:17:57.592941 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <memory unit='MiB'>2200</memory>
	I0731 21:17:57.592974 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <vcpu>2</vcpu>
	I0731 21:17:57.592998 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <features>
	I0731 21:17:57.593007 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <acpi/>
	I0731 21:17:57.593013 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <apic/>
	I0731 21:17:57.593024 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <pae/>
	I0731 21:17:57.593032 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     
	I0731 21:17:57.593044 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   </features>
	I0731 21:17:57.593054 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <cpu mode='host-passthrough'>
	I0731 21:17:57.593066 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   
	I0731 21:17:57.593076 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   </cpu>
	I0731 21:17:57.593088 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <os>
	I0731 21:17:57.593103 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <type>hvm</type>
	I0731 21:17:57.593205 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <boot dev='cdrom'/>
	I0731 21:17:57.593257 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <boot dev='hd'/>
	I0731 21:17:57.593289 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <bootmenu enable='no'/>
	I0731 21:17:57.593315 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   </os>
	I0731 21:17:57.593328 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   <devices>
	I0731 21:17:57.593340 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <disk type='file' device='cdrom'>
	I0731 21:17:57.593357 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/boot2docker.iso'/>
	I0731 21:17:57.593375 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <target dev='hdc' bus='scsi'/>
	I0731 21:17:57.593406 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <readonly/>
	I0731 21:17:57.593431 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </disk>
	I0731 21:17:57.593457 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <disk type='file' device='disk'>
	I0731 21:17:57.593474 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:17:57.593519 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/kubernetes-upgrade-202332.rawdisk'/>
	I0731 21:17:57.593540 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <target dev='hda' bus='virtio'/>
	I0731 21:17:57.593553 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </disk>
	I0731 21:17:57.593570 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <interface type='network'>
	I0731 21:17:57.593584 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <source network='mk-kubernetes-upgrade-202332'/>
	I0731 21:17:57.593595 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <model type='virtio'/>
	I0731 21:17:57.593606 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </interface>
	I0731 21:17:57.593617 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <interface type='network'>
	I0731 21:17:57.593628 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <source network='default'/>
	I0731 21:17:57.593639 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <model type='virtio'/>
	I0731 21:17:57.593650 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </interface>
	I0731 21:17:57.593661 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <serial type='pty'>
	I0731 21:17:57.593674 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <target port='0'/>
	I0731 21:17:57.593692 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </serial>
	I0731 21:17:57.593705 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <console type='pty'>
	I0731 21:17:57.593717 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <target type='serial' port='0'/>
	I0731 21:17:57.593749 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </console>
	I0731 21:17:57.593778 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     <rng model='virtio'>
	I0731 21:17:57.593793 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)       <backend model='random'>/dev/random</backend>
	I0731 21:17:57.593800 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     </rng>
	I0731 21:17:57.593810 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     
	I0731 21:17:57.593824 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)     
	I0731 21:17:57.593834 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332)   </devices>
	I0731 21:17:57.593844 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) </domain>
	I0731 21:17:57.593857 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) 
	I0731 21:17:57.597788 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:e6:b5:ae in network default
	I0731 21:17:57.598580 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Ensuring networks are active...
	I0731 21:17:57.598611 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:17:57.599468 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Ensuring network default is active
	I0731 21:17:57.599876 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Ensuring network mk-kubernetes-upgrade-202332 is active
	I0731 21:17:57.600507 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Getting domain xml...
	I0731 21:17:57.601424 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Creating domain...
	I0731 21:17:58.949640 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Waiting to get IP...
	I0731 21:17:58.950687 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:17:58.951169 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:17:58.951243 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:58.951160 1142567 retry.go:31] will retry after 270.228956ms: waiting for machine to come up
	I0731 21:17:59.222772 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.223374 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.223402 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:59.223332 1142567 retry.go:31] will retry after 303.90702ms: waiting for machine to come up
	I0731 21:17:59.529093 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.529667 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.529690 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:59.529635 1142567 retry.go:31] will retry after 325.494962ms: waiting for machine to come up
	I0731 21:17:59.857129 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.857682 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:17:59.857707 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:17:59.857628 1142567 retry.go:31] will retry after 513.659325ms: waiting for machine to come up
	I0731 21:18:00.373323 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:00.373777 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:00.373805 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:00.373713 1142567 retry.go:31] will retry after 534.957794ms: waiting for machine to come up
	I0731 21:18:00.910567 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:00.911126 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:00.911156 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:00.911063 1142567 retry.go:31] will retry after 877.678048ms: waiting for machine to come up
	I0731 21:18:01.790278 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:01.790884 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:01.790922 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:01.790848 1142567 retry.go:31] will retry after 988.460436ms: waiting for machine to come up
	I0731 21:18:02.780722 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:02.781193 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:02.781221 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:02.781159 1142567 retry.go:31] will retry after 982.353264ms: waiting for machine to come up
	I0731 21:18:03.765249 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:03.765730 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:03.765760 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:03.765678 1142567 retry.go:31] will retry after 1.361661236s: waiting for machine to come up
	I0731 21:18:05.129277 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:05.129805 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:05.129839 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:05.129750 1142567 retry.go:31] will retry after 2.210912702s: waiting for machine to come up
	I0731 21:18:07.342030 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:07.342617 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:07.342648 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:07.342559 1142567 retry.go:31] will retry after 2.317203632s: waiting for machine to come up
	I0731 21:18:09.661793 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:09.662326 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:09.662354 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:09.662283 1142567 retry.go:31] will retry after 2.346626812s: waiting for machine to come up
	I0731 21:18:12.010568 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:12.011110 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:12.011138 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:12.011042 1142567 retry.go:31] will retry after 3.73644592s: waiting for machine to come up
	I0731 21:18:16.269168 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:16.270233 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find current IP address of domain kubernetes-upgrade-202332 in network mk-kubernetes-upgrade-202332
	I0731 21:18:16.270262 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | I0731 21:18:16.270184 1142567 retry.go:31] will retry after 4.554865221s: waiting for machine to come up
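The retries above are the libmachine KVM driver polling libvirt for the guest's DHCP lease with a growing backoff. A minimal bash sketch of the same poll (network and MAC address are taken from the log; `virsh net-dhcp-leases` is a stock libvirt command, not what minikube itself calls):

# Hedged illustration only: poll the libvirt network for this MAC's lease,
# backing off between attempts, until an IPv4 address shows up.
net="mk-kubernetes-upgrade-202332"
mac="52:54:00:a5:c8:0d"
delay=1
for attempt in $(seq 1 15); do
  ip=$(virsh --connect qemu:///system net-dhcp-leases "$net" 2>/dev/null \
         | awk -v m="$mac" '$3 == m {print $5}' | cut -d/ -f1)
  if [ -n "$ip" ]; then
    echo "machine is up at $ip"
    break
  fi
  echo "no lease yet, retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))   # the real code adds jitter; this is plain exponential backoff
done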
	I0731 21:18:20.829542 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:20.830052 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Found IP for machine: 192.168.39.10
	I0731 21:18:20.830079 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Reserving static IP address...
	I0731 21:18:20.830093 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has current primary IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:20.830534 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-202332", mac: "52:54:00:a5:c8:0d", ip: "192.168.39.10"} in network mk-kubernetes-upgrade-202332
	I0731 21:18:20.916953 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Getting to WaitForSSH function...
	I0731 21:18:20.916985 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Reserved static IP address: 192.168.39.10
	I0731 21:18:20.917033 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Waiting for SSH to be available...
	I0731 21:18:20.920051 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:20.920559 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:20.920589 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:20.920671 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Using SSH client type: external
	I0731 21:18:20.920697 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa (-rw-------)
	I0731 21:18:20.920726 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:18:20.920737 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | About to run SSH command:
	I0731 21:18:20.920747 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | exit 0
	I0731 21:18:21.044070 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | SSH cmd err, output: <nil>: 
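The same reachability probe can be reproduced by hand with stock OpenSSH, using the options and key path the log prints above (illustrative only; the test performs this internally):

# Returns 0 once the guest accepts SSH with the profile's key (values copied from the log).
ssh -F /dev/null \
    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o PasswordAuthentication=no -o IdentitiesOnly=yes \
    -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa \
    docker@192.168.39.10 'exit 0' && echo "SSH is up"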
	I0731 21:18:21.044349 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) KVM machine creation complete!
	I0731 21:18:21.044805 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetConfigRaw
	I0731 21:18:21.045403 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:21.045606 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:21.045891 1141656 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:18:21.045910 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetState
	I0731 21:18:21.047369 1141656 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:18:21.047388 1141656 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:18:21.047395 1141656 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:18:21.047404 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.049929 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.050273 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.050296 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.050431 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.050631 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.050798 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.050927 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.051088 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:21.051308 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:21.051322 1141656 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:18:21.147444 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:18:21.147476 1141656 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:18:21.147485 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.150367 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.150745 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.150775 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.150967 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.151177 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.151341 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.151509 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.151695 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:21.151875 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:21.151886 1141656 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:18:21.252643 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:18:21.252763 1141656 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:18:21.252779 1141656 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:18:21.252792 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:18:21.253093 1141656 buildroot.go:166] provisioning hostname "kubernetes-upgrade-202332"
	I0731 21:18:21.253129 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:18:21.253346 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.256149 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.256518 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.256574 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.256729 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.256943 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.257101 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.257239 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.257387 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:21.257584 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:21.257598 1141656 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-202332 && echo "kubernetes-upgrade-202332" | sudo tee /etc/hostname
	I0731 21:18:21.371550 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-202332
	
	I0731 21:18:21.371590 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.374560 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.374974 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.375001 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.375202 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.375415 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.375621 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.375787 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.375987 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:21.376240 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:21.376263 1141656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-202332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-202332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-202332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:18:21.486204 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
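Collected into one idempotent script, the two hostname-provisioning commands above amount to roughly the following (hostname taken from the log; run on the guest):

# Set the hostname and make sure /etc/hosts resolves it, mirroring the SSH commands above.
NAME="kubernetes-upgrade-202332"
sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
  else
    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
  fi
fi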
	I0731 21:18:21.486243 1141656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:18:21.486286 1141656 buildroot.go:174] setting up certificates
	I0731 21:18:21.486297 1141656 provision.go:84] configureAuth start
	I0731 21:18:21.486308 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:18:21.486621 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:18:21.489282 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.489756 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.489801 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.489982 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.492409 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.492783 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.492811 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.493013 1141656 provision.go:143] copyHostCerts
	I0731 21:18:21.493081 1141656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:18:21.493091 1141656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:18:21.493732 1141656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:18:21.493872 1141656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:18:21.493884 1141656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:18:21.493907 1141656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:18:21.493972 1141656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:18:21.493980 1141656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:18:21.493999 1141656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:18:21.494064 1141656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-202332 san=[127.0.0.1 192.168.39.10 kubernetes-upgrade-202332 localhost minikube]
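configureAuth issues the server certificate in Go, so no command appears in the log; as a rough openssl equivalent for the SANs listed above (output file names here are assumptions, not minikube's actual paths):

# Hedged sketch: sign a server cert with the profile's CA, covering the SANs from the log line above.
CERTS=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-202332" -out server.csr
openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
  -days 365 -out server.pem \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.10,DNS:kubernetes-upgrade-202332,DNS:localhost,DNS:minikube')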
	I0731 21:18:21.660280 1141656 provision.go:177] copyRemoteCerts
	I0731 21:18:21.660352 1141656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:18:21.660380 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.663212 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.663559 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.663596 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.663805 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.664000 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.664191 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.664357 1141656 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:18:21.742075 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:18:21.766866 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 21:18:21.793239 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:18:21.819826 1141656 provision.go:87] duration metric: took 333.510319ms to configureAuth
	I0731 21:18:21.819863 1141656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:18:21.820027 1141656 config.go:182] Loaded profile config "kubernetes-upgrade-202332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:18:21.820163 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:21.823146 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.823446 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:21.823475 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:21.823719 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:21.823929 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.824108 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:21.824253 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:21.824404 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:21.824591 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:21.824611 1141656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:18:22.072778 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:18:22.072812 1141656 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:18:22.072820 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetURL
	I0731 21:18:22.074127 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | Using libvirt version 6000000
	I0731 21:18:22.076415 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.076793 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.076821 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.077038 1141656 main.go:141] libmachine: Docker is up and running!
	I0731 21:18:22.077057 1141656 main.go:141] libmachine: Reticulating splines...
	I0731 21:18:22.077066 1141656 client.go:171] duration metric: took 24.949332044s to LocalClient.Create
	I0731 21:18:22.077095 1141656 start.go:167] duration metric: took 24.949394867s to libmachine.API.Create "kubernetes-upgrade-202332"
	I0731 21:18:22.077109 1141656 start.go:293] postStartSetup for "kubernetes-upgrade-202332" (driver="kvm2")
	I0731 21:18:22.077123 1141656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:18:22.077149 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:22.077423 1141656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:18:22.077450 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:22.079672 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.079993 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.080031 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.080172 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:22.080372 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:22.080542 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:22.080670 1141656 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:18:22.161929 1141656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:18:22.166259 1141656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:18:22.166294 1141656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:18:22.166363 1141656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:18:22.166434 1141656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:18:22.166520 1141656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:18:22.175622 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:18:22.199807 1141656 start.go:296] duration metric: took 122.68105ms for postStartSetup
	I0731 21:18:22.199879 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetConfigRaw
	I0731 21:18:22.200668 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:18:22.203556 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.203956 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.203986 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.204304 1141656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/config.json ...
	I0731 21:18:22.204582 1141656 start.go:128] duration metric: took 25.099132641s to createHost
	I0731 21:18:22.204615 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:22.206775 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.207099 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.207122 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.207285 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:22.207474 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:22.207647 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:22.207797 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:22.207973 1141656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:22.208210 1141656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:18:22.208235 1141656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:18:22.313068 1141656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460702.264503049
	
	I0731 21:18:22.313096 1141656 fix.go:216] guest clock: 1722460702.264503049
	I0731 21:18:22.313104 1141656 fix.go:229] Guest: 2024-07-31 21:18:22.264503049 +0000 UTC Remote: 2024-07-31 21:18:22.204599102 +0000 UTC m=+47.306594440 (delta=59.903947ms)
	I0731 21:18:22.313126 1141656 fix.go:200] guest clock delta is within tolerance: 59.903947ms
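The skew check above compares the guest's `date +%s.%N` output against the host's clock and accepts anything within tolerance; a hand-rolled version of the same comparison might look like this (IP and key path from the log, arithmetic via bc):

# Hedged illustration: report host/guest clock delta in milliseconds.
host_ts=$(date +%s.%N)
guest_ts=$(ssh -o StrictHostKeyChecking=no \
  -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa \
  docker@192.168.39.10 'date +%s.%N')
echo "guest clock delta: $(echo "($host_ts - $guest_ts) * 1000" | bc -l) ms"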
	I0731 21:18:22.313131 1141656 start.go:83] releasing machines lock for "kubernetes-upgrade-202332", held for 25.207855752s
	I0731 21:18:22.313159 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:22.313505 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:18:22.316574 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.317042 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.317080 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.317376 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:22.317972 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:22.318212 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:18:22.318307 1141656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:18:22.318361 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:22.318445 1141656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:18:22.318463 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:18:22.321097 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.321568 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.321601 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.321692 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.321908 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:22.322126 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:22.322220 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:22.322262 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:22.322380 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:22.322465 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:18:22.322642 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:18:22.322665 1141656 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:18:22.322815 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:18:22.322987 1141656 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:18:22.396896 1141656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:18:22.423515 1141656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:18:22.590432 1141656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:18:22.597059 1141656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:18:22.597147 1141656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:18:22.619929 1141656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
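For this run the `find ... -exec mv` above matched a single bridge config, named in the log line; its effect (and the inverse, if the config ever needed to be restored) is simply:

# Effect of the disable step for this run (file name from the log); the commented line undoes it.
sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
# sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist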
	I0731 21:18:22.619956 1141656 start.go:495] detecting cgroup driver to use...
	I0731 21:18:22.620017 1141656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:18:22.638138 1141656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:18:22.653787 1141656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:18:22.653859 1141656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:18:22.668355 1141656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:18:22.683810 1141656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:18:22.801296 1141656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:18:22.990924 1141656 docker.go:233] disabling docker service ...
	I0731 21:18:22.991008 1141656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:18:23.005257 1141656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:18:23.018877 1141656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:18:23.137506 1141656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:18:23.251806 1141656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:18:23.265800 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:18:23.284895 1141656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:18:23.284973 1141656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:23.295159 1141656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:18:23.295238 1141656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:23.305300 1141656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:23.315584 1141656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
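The three sed edits above leave the drop-in pointing CRI-O at the v1.20.0-era pause image and the cgroupfs driver. The exact file contents are not printed in the log, but a plausible end state, written directly, would be:

# Assumed result of the sed edits above (contents inferred, not copied from the log).
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF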
	I0731 21:18:23.326196 1141656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:18:23.336960 1141656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:18:23.346957 1141656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:18:23.347034 1141656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:18:23.363272 1141656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
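Those last three commands cover the usual kubeadm networking prerequisites; done by hand they are roughly equivalent to:

# Bridge traffic must traverse iptables and IPv4 forwarding must be on before kubeadm preflight passes.
sudo modprobe br_netfilter
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
sysctl net.bridge.bridge-nf-call-iptables   # typically reports 1 once br_netfilter is loaded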
	I0731 21:18:23.375202 1141656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:18:23.489418 1141656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:18:23.631350 1141656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:18:23.631442 1141656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:18:23.636258 1141656 start.go:563] Will wait 60s for crictl version
	I0731 21:18:23.636351 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:23.640185 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:18:23.677789 1141656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:18:23.677877 1141656 ssh_runner.go:195] Run: crio --version
	I0731 21:18:23.707174 1141656 ssh_runner.go:195] Run: crio --version
	I0731 21:18:23.742965 1141656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:18:23.744452 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:18:23.747926 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:23.748489 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:18:23.748531 1141656 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:18:23.748766 1141656 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:18:23.754424 1141656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:18:23.767668 1141656 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:18:23.767820 1141656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:18:23.767880 1141656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:18:23.800710 1141656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:18:23.800801 1141656 ssh_runner.go:195] Run: which lz4
	I0731 21:18:23.804738 1141656 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:18:23.808980 1141656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:18:23.809016 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:18:25.455940 1141656 crio.go:462] duration metric: took 1.651234438s to copy over tarball
	I0731 21:18:25.456039 1141656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:18:28.099863 1141656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.643792722s)
	I0731 21:18:28.099893 1141656 crio.go:469] duration metric: took 2.643917723s to extract the tarball
	I0731 21:18:28.099902 1141656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:18:28.142199 1141656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:18:28.185302 1141656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:18:28.185336 1141656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:18:28.185401 1141656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:18:28.185422 1141656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:18:28.185435 1141656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:18:28.185465 1141656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:18:28.185476 1141656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:18:28.185489 1141656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:18:28.185556 1141656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:18:28.185573 1141656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:18:28.187251 1141656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:18:28.187260 1141656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:18:28.187269 1141656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:18:28.187276 1141656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:18:28.187336 1141656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:18:28.187338 1141656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:18:28.187361 1141656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:18:28.187382 1141656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:18:28.343002 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:18:28.346783 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:18:28.349456 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:18:28.350269 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:18:28.356818 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:18:28.372501 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:18:28.419499 1141656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:18:28.419562 1141656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:18:28.419616 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.460900 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:18:28.495610 1141656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:18:28.495676 1141656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:18:28.495738 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.513282 1141656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:18:28.513332 1141656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:18:28.513399 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.563163 1141656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:18:28.563212 1141656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:18:28.563257 1141656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:18:28.563269 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.563293 1141656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:18:28.563380 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.563416 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:18:28.563424 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:18:28.563379 1141656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:18:28.563462 1141656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:18:28.563500 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.563383 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:18:28.563316 1141656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:18:28.563628 1141656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:18:28.563691 1141656 ssh_runner.go:195] Run: which crictl
	I0731 21:18:28.567689 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:18:28.570743 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:18:28.680063 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:18:28.680138 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:18:28.680192 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:18:28.680203 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:18:28.680260 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:18:28.680265 1141656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:18:28.680310 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:18:28.726042 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:18:28.726104 1141656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:18:28.870419 1141656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:18:29.008419 1141656 cache_images.go:92] duration metric: took 823.06074ms to LoadCachedImages
	W0731 21:18:29.008539 1141656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
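The warning above just means the host-side image cache was empty for these v1.20.0 images, so they will be pulled instead. If pre-seeding the cache mattered, a hypothetical (not part of this test flow) way to do it on the host would be:

# Hypothetical remediation, labelled as such: pre-populate minikube's on-disk image cache.
minikube cache add \
  registry.k8s.io/kube-apiserver:v1.20.0 \
  registry.k8s.io/kube-controller-manager:v1.20.0 \
  registry.k8s.io/kube-scheduler:v1.20.0 \
  registry.k8s.io/kube-proxy:v1.20.0 \
  registry.k8s.io/pause:3.2 \
  registry.k8s.io/etcd:3.4.13-0 \
  registry.k8s.io/coredns:1.7.0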
	I0731 21:18:29.008556 1141656 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.20.0 crio true true} ...
	I0731 21:18:29.008671 1141656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-202332 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
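The kubelet flags above end up installed as a systemd drop-in on the guest; a sketch of doing the same by hand (the drop-in path is an assumption about where minikube places it, while the ExecStart line is copied from the log):

# Assumed drop-in path; ExecStart flags copied verbatim from the unit dump above.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-202332 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
EOF
sudo systemctl daemon-reload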
	I0731 21:18:29.008738 1141656 ssh_runner.go:195] Run: crio config
	I0731 21:18:29.068460 1141656 cni.go:84] Creating CNI manager for ""
	I0731 21:18:29.068487 1141656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:18:29.068501 1141656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:18:29.068520 1141656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-202332 NodeName:kubernetes-upgrade-202332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:18:29.068654 1141656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-202332"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
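	Before handing a kubeadm.yaml like the one above to a real init, it can be sanity-checked with a dry run that renders the manifests without starting anything; a minimal sketch, assuming the same v1.20.0 binaries path used in this run:

		$ sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
		    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run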
	I0731 21:18:29.068718 1141656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:18:29.080100 1141656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:18:29.080200 1141656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:18:29.090293 1141656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0731 21:18:29.109986 1141656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:18:29.128305 1141656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:18:29.145343 1141656 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0731 21:18:29.149927 1141656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
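	The /etc/hosts rewrite above can be verified with a plain lookup; a quick sketch, not taken from the captured run:

		$ getent hosts control-plane.minikube.internal
		192.168.39.10	control-plane.minikube.internal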
	I0731 21:18:29.163294 1141656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:18:29.298180 1141656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:18:29.317467 1141656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332 for IP: 192.168.39.10
	I0731 21:18:29.317504 1141656 certs.go:194] generating shared ca certs ...
	I0731 21:18:29.317530 1141656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:29.317730 1141656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:18:29.317797 1141656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:18:29.317811 1141656 certs.go:256] generating profile certs ...
	I0731 21:18:29.317892 1141656 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.key
	I0731 21:18:29.317912 1141656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.crt with IP's: []
	I0731 21:18:29.492117 1141656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.crt ...
	I0731 21:18:29.492159 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.crt: {Name:mk00e59233524176bd74e6baa2e4df6d39f752ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:29.492408 1141656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.key ...
	I0731 21:18:29.492433 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.key: {Name:mke620f31392bdcc3cde137e7517513de54d86ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:29.492571 1141656 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key.60a08cd8
	I0731 21:18:29.492592 1141656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt.60a08cd8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.10]
	I0731 21:18:29.661675 1141656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt.60a08cd8 ...
	I0731 21:18:29.661716 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt.60a08cd8: {Name:mkc1b918bbd1d3d4df4ab219cfae98c00fc17cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:29.661924 1141656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key.60a08cd8 ...
	I0731 21:18:29.661950 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key.60a08cd8: {Name:mk51e9d7cb11313ddc8e4b58bf3f74eb679c9b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:29.662075 1141656 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt.60a08cd8 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt
	I0731 21:18:29.662170 1141656 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key.60a08cd8 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key
	I0731 21:18:29.662223 1141656 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key
	I0731 21:18:29.662241 1141656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.crt with IP's: []
	I0731 21:18:30.042797 1141656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.crt ...
	I0731 21:18:30.042842 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.crt: {Name:mk86369195e73543dcb375ab80f695d4156ddbbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:30.083318 1141656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key ...
	I0731 21:18:30.083371 1141656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key: {Name:mk7aee14dce062dd2ac44b08d611634a7fd87749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
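	If the generated profile certificates need to be inspected by hand, openssl can print the SANs that were requested above (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.10 for the apiserver cert); a sketch using the same profile path as the run:

		$ openssl x509 -noout -text \
		    -in /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt \
		    | grep -A1 'Subject Alternative Name'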
	I0731 21:18:30.083701 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:18:30.083758 1141656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:18:30.083773 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:18:30.083821 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:18:30.083856 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:18:30.083887 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:18:30.083940 1141656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:18:30.084898 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:18:30.112805 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:18:30.137059 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:18:30.166191 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:18:30.197820 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 21:18:30.230150 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:18:30.255093 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:18:30.279456 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:18:30.303687 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:18:30.328719 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:18:30.353223 1141656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:18:30.378575 1141656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:18:30.397953 1141656 ssh_runner.go:195] Run: openssl version
	I0731 21:18:30.404275 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:18:30.417511 1141656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:18:30.422698 1141656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:18:30.422782 1141656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:18:30.428887 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:18:30.440378 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:18:30.452319 1141656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:18:30.457052 1141656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:18:30.457129 1141656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:18:30.463506 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:18:30.475766 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:18:30.489034 1141656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:18:30.493810 1141656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:18:30.493881 1141656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:18:30.499826 1141656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
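	The /etc/ssl/certs link names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: each CA is linked as <hash>.0, where the hash is what the `openssl x509 -hash` calls in this run print. For example:

		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941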
	I0731 21:18:30.511468 1141656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:18:30.517360 1141656 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:18:30.517420 1141656 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:18:30.517529 1141656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:18:30.517602 1141656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:18:30.565891 1141656 cri.go:89] found id: ""
	I0731 21:18:30.565990 1141656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:18:30.577646 1141656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:18:30.592821 1141656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:18:30.603855 1141656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:18:30.603881 1141656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:18:30.603962 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:18:30.614559 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:18:30.614651 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:18:30.626858 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:18:30.637189 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:18:30.637271 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:18:30.648003 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:18:30.658421 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:18:30.658501 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:18:30.672175 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:18:30.685157 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:18:30.685231 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:18:30.697885 1141656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:18:30.997944 1141656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:20:29.081698 1141656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:20:29.081838 1141656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:20:29.083251 1141656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:20:29.083322 1141656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:20:29.083418 1141656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:20:29.083575 1141656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:20:29.083739 1141656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:20:29.083802 1141656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:20:29.086357 1141656 out.go:204]   - Generating certificates and keys ...
	I0731 21:20:29.086436 1141656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:20:29.086494 1141656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:20:29.086576 1141656 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:20:29.086646 1141656 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:20:29.086711 1141656 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:20:29.086779 1141656 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:20:29.086856 1141656 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:20:29.086992 1141656 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-202332 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I0731 21:20:29.087039 1141656 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:20:29.087187 1141656 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-202332 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	I0731 21:20:29.087258 1141656 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:20:29.087331 1141656 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:20:29.087392 1141656 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:20:29.087452 1141656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:20:29.087515 1141656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:20:29.087579 1141656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:20:29.087638 1141656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:20:29.087691 1141656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:20:29.087823 1141656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:20:29.087958 1141656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:20:29.088014 1141656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:20:29.088075 1141656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:20:29.089250 1141656 out.go:204]   - Booting up control plane ...
	I0731 21:20:29.089341 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:20:29.089426 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:20:29.089537 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:20:29.089657 1141656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:20:29.089867 1141656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:20:29.089929 1141656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:20:29.090006 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:20:29.090253 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:20:29.090350 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:20:29.090534 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:20:29.090598 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:20:29.090801 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:20:29.090880 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:20:29.091038 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:20:29.091134 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:20:29.091390 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:20:29.091398 1141656 kubeadm.go:310] 
	I0731 21:20:29.091431 1141656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:20:29.091491 1141656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:20:29.091502 1141656 kubeadm.go:310] 
	I0731 21:20:29.091547 1141656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:20:29.091595 1141656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:20:29.091749 1141656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:20:29.091760 1141656 kubeadm.go:310] 
	I0731 21:20:29.091856 1141656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:20:29.091886 1141656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:20:29.091914 1141656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:20:29.091920 1141656 kubeadm.go:310] 
	I0731 21:20:29.092024 1141656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:20:29.092145 1141656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:20:29.092163 1141656 kubeadm.go:310] 
	I0731 21:20:29.092282 1141656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:20:29.092405 1141656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:20:29.092470 1141656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:20:29.092546 1141656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:20:29.092591 1141656 kubeadm.go:310] 
	W0731 21:20:29.092712 1141656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-202332 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-202332 localhost] and IPs [192.168.39.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
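	When an init attempt dies at wait-control-plane like this, the kubelet journal on the node usually contains the underlying error; a minimal sketch of the checks kubeadm itself suggests, adapted to the cri-o socket used in this run:

		$ sudo systemctl status kubelet
		$ sudo journalctl -xeu kubelet --no-pager | tail -n 50
		$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause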
	I0731 21:20:29.092760 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:20:29.589739 1141656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:20:29.603458 1141656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:20:29.613458 1141656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:20:29.613487 1141656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:20:29.613538 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:20:29.623784 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:20:29.623848 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:20:29.633474 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:20:29.646480 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:20:29.646555 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:20:29.657666 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:20:29.667023 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:20:29.667104 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:20:29.676810 1141656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:20:29.686171 1141656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:20:29.686256 1141656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:20:29.696277 1141656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:20:29.773309 1141656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:20:29.773405 1141656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:20:29.927019 1141656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:20:29.927165 1141656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:20:29.927323 1141656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:20:30.126437 1141656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:20:30.128221 1141656 out.go:204]   - Generating certificates and keys ...
	I0731 21:20:30.128344 1141656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:20:30.128431 1141656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:20:30.128534 1141656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:20:30.128613 1141656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:20:30.128702 1141656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:20:30.128781 1141656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:20:30.128869 1141656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:20:30.129227 1141656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:20:30.129636 1141656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:20:30.130104 1141656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:20:30.130301 1141656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:20:30.130379 1141656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:20:30.298967 1141656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:20:30.519337 1141656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:20:30.888125 1141656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:20:31.182437 1141656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:20:31.197277 1141656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:20:31.198833 1141656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:20:31.198948 1141656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:20:31.347761 1141656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:20:31.349631 1141656 out.go:204]   - Booting up control plane ...
	I0731 21:20:31.349777 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:20:31.351529 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:20:31.353485 1141656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:20:31.354561 1141656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:20:31.360484 1141656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:21:11.361761 1141656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:21:11.362259 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:11.362531 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:16.363266 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:16.363566 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:26.363902 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:26.364107 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:46.365474 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:46.365786 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:22:26.365146 1141656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:22:26.365318 1141656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:22:26.365327 1141656 kubeadm.go:310] 
	I0731 21:22:26.365377 1141656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:22:26.365444 1141656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:22:26.365457 1141656 kubeadm.go:310] 
	I0731 21:22:26.365500 1141656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:22:26.365548 1141656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:22:26.365658 1141656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:22:26.365670 1141656 kubeadm.go:310] 
	I0731 21:22:26.365754 1141656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:22:26.365828 1141656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:22:26.365899 1141656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:22:26.365912 1141656 kubeadm.go:310] 
	I0731 21:22:26.366052 1141656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:22:26.366173 1141656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:22:26.366185 1141656 kubeadm.go:310] 
	I0731 21:22:26.366323 1141656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:22:26.366448 1141656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:22:26.366541 1141656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:22:26.366614 1141656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:22:26.366620 1141656 kubeadm.go:310] 
	I0731 21:22:26.367144 1141656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:22:26.367268 1141656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:22:26.367351 1141656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:22:26.367441 1141656 kubeadm.go:394] duration metric: took 3m55.850024371s to StartCluster
	I0731 21:22:26.367488 1141656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:22:26.367545 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:22:26.401113 1141656 cri.go:89] found id: ""
	I0731 21:22:26.401151 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.401163 1141656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:22:26.401171 1141656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:22:26.401259 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:22:26.435665 1141656 cri.go:89] found id: ""
	I0731 21:22:26.435702 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.435714 1141656 logs.go:278] No container was found matching "etcd"
	I0731 21:22:26.435724 1141656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:22:26.435792 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:22:26.468398 1141656 cri.go:89] found id: ""
	I0731 21:22:26.468428 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.468436 1141656 logs.go:278] No container was found matching "coredns"
	I0731 21:22:26.468444 1141656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:22:26.468510 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:22:26.501764 1141656 cri.go:89] found id: ""
	I0731 21:22:26.501802 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.501815 1141656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:22:26.501824 1141656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:22:26.501893 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:22:26.533866 1141656 cri.go:89] found id: ""
	I0731 21:22:26.533905 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.533917 1141656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:22:26.533926 1141656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:22:26.533992 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:22:26.569406 1141656 cri.go:89] found id: ""
	I0731 21:22:26.569440 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.569462 1141656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:22:26.569471 1141656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:22:26.569545 1141656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:22:26.602064 1141656 cri.go:89] found id: ""
	I0731 21:22:26.602094 1141656 logs.go:276] 0 containers: []
	W0731 21:22:26.602102 1141656 logs.go:278] No container was found matching "kindnet"
	I0731 21:22:26.602114 1141656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:22:26.602128 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:22:26.654067 1141656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:22:26.654112 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:22:26.669873 1141656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:22:26.669905 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:22:26.819128 1141656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:22:26.819152 1141656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:22:26.819167 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:22:26.913821 1141656 logs.go:123] Gathering logs for container status ...
	I0731 21:22:26.913865 1141656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 21:22:26.950368 1141656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:22:26.950428 1141656 out.go:239] * 
	W0731 21:22:26.950490 1141656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:22:26.950513 1141656 out.go:239] * 
	W0731 21:22:26.951349 1141656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:22:26.954578 1141656 out.go:177] 
	W0731 21:22:26.955623 1141656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:22:26.955677 1141656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:22:26.955719 1141656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:22:26.957726 1141656 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
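The start above exited with status 109 after the K8S_KUBELET_NOT_RUNNING error in the captured stderr, and the log itself suggests retrying with the systemd cgroup driver for the kubelet. A minimal sketch of that retry, assuming the printed suggestion applies to this profile (not verified against this run):

	# hypothetical retry based on the suggestion minikube printed above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd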
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-202332
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-202332: (6.292599562s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-202332 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-202332 status --format={{.Host}}: exit status 7 (66.409784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.466520775s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-202332 version --output=json
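The version check above is presumably how the test confirms that the upgraded control plane now reports v1.31.0-beta.0. An equivalent manual spot check might look like the following sketch (use of jq is an assumption, not part of the test harness):

	# hypothetical manual check of the reported server version after the upgrade
	kubectl --context kubernetes-upgrade-202332 version --output=json \
	  | jq -r '.serverVersion.gitVersion'   # expected: v1.31.0-beta.0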
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.955132ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-202332
	    minikube start -p kubernetes-upgrade-202332 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2023322 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-202332 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-202332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.68672967s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-31 21:23:21.675123017 +0000 UTC m=+4442.833554149
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-202332 -n kubernetes-upgrade-202332
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-202332 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-202332 logs -n 25: (1.280456601s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-081034 sudo                           | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-081034                                | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-202332                          | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p pause-355751                                       | pause-355751              | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p stopped-upgrade-140201                             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-425308 ssh                               | cert-options-425308       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-425308 -- sudo                        | cert-options-425308       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-425308                                | cert-options-425308       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| start   | -p old-k8s-version-275462                             | old-k8s-version-275462    | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-140201 stop                           | minikube                  | jenkins | v1.26.0 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	| start   | -p stopped-upgrade-140201                             | stopped-upgrade-140201    | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-140201                             | stopped-upgrade-140201    | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	| start   | -p no-preload-018891 --memory=2200                    | no-preload-018891         | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:21 UTC |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	| start   | -p cert-expiration-238338                             | cert-expiration-238338    | jenkins | v1.33.1 | 31 Jul 24 21:20 UTC | 31 Jul 24 21:21 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-238338                             | cert-expiration-238338    | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                 | embed-certs-563652        | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891            | no-preload-018891         | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-018891                                  | no-preload-018891         | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652           | embed-certs-563652        | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                 | embed-certs-563652        | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                          | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                          | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                          | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                          | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                   |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462       | old-k8s-version-275462    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:23:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:23:08.028625 1145804 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:23:08.028879 1145804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:23:08.028890 1145804 out.go:304] Setting ErrFile to fd 2...
	I0731 21:23:08.028894 1145804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:23:08.029103 1145804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:23:08.029674 1145804 out.go:298] Setting JSON to false
	I0731 21:23:08.030831 1145804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18339,"bootTime":1722442649,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:23:08.030905 1145804 start.go:139] virtualization: kvm guest
	I0731 21:23:08.032896 1145804 out.go:177] * [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:23:08.034310 1145804 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:23:08.034352 1145804 notify.go:220] Checking for updates...
	I0731 21:23:08.036635 1145804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:23:08.037828 1145804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:23:08.038968 1145804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:23:08.040131 1145804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:23:08.041202 1145804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:23:08.042692 1145804 config.go:182] Loaded profile config "kubernetes-upgrade-202332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:23:08.043106 1145804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:23:08.043177 1145804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:23:08.058524 1145804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0731 21:23:08.059019 1145804 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:23:08.059600 1145804 main.go:141] libmachine: Using API Version  1
	I0731 21:23:08.059627 1145804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:23:08.060021 1145804 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:23:08.060278 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:08.060582 1145804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:23:08.060950 1145804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:23:08.060999 1145804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:23:08.076512 1145804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0731 21:23:08.077016 1145804 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:23:08.077739 1145804 main.go:141] libmachine: Using API Version  1
	I0731 21:23:08.077797 1145804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:23:08.078169 1145804 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:23:08.078407 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:08.117111 1145804 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:23:08.118531 1145804 start.go:297] selected driver: kvm2
	I0731 21:23:08.118554 1145804 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:23:08.118714 1145804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:23:08.119645 1145804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:23:08.119737 1145804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:23:08.135488 1145804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:23:08.136013 1145804 cni.go:84] Creating CNI manager for ""
	I0731 21:23:08.136033 1145804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:23:08.136076 1145804 start.go:340] cluster config:
	{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:23:08.136218 1145804 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:23:08.137883 1145804 out.go:177] * Starting "kubernetes-upgrade-202332" primary control-plane node in "kubernetes-upgrade-202332" cluster
	I0731 21:23:08.139173 1145804 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:23:08.139235 1145804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:23:08.139251 1145804 cache.go:56] Caching tarball of preloaded images
	I0731 21:23:08.139364 1145804 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:23:08.139379 1145804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
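The preload.go and cache.go lines above only verify that the version-specific preloaded-images tarball already exists in the local cache; because it does, the download is skipped and the cached archive is reused. A toy sketch of that existence check in Go (the path layout and helper name are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location for a given k8s version and runtime,
// in the spirit of the tarball path shown in the log above (layout is illustrative).
func preloadPath(cacheDir, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.31.0-beta.0", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}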
	I0731 21:23:08.139499 1145804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/config.json ...
	I0731 21:23:08.139769 1145804 start.go:360] acquireMachinesLock for kubernetes-upgrade-202332: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:23:08.139827 1145804 start.go:364] duration metric: took 30.997µs to acquireMachinesLock for "kubernetes-upgrade-202332"
	I0731 21:23:08.139842 1145804 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:23:08.139848 1145804 fix.go:54] fixHost starting: 
	I0731 21:23:08.140161 1145804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:23:08.140200 1145804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:23:08.155531 1145804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
	I0731 21:23:08.156052 1145804 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:23:08.156627 1145804 main.go:141] libmachine: Using API Version  1
	I0731 21:23:08.156668 1145804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:23:08.157182 1145804 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:23:08.157446 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:08.157614 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetState
	I0731 21:23:08.159180 1145804 fix.go:112] recreateIfNeeded on kubernetes-upgrade-202332: state=Running err=<nil>
	W0731 21:23:08.159201 1145804 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:23:08.160912 1145804 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-202332" VM ...
	I0731 21:23:08.162212 1145804 machine.go:94] provisionDockerMachine start ...
	I0731 21:23:08.162261 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:08.162522 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.164966 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.165427 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.165459 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.165585 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:08.165797 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.165986 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.166137 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:08.166306 1145804 main.go:141] libmachine: Using SSH client type: native
	I0731 21:23:08.166536 1145804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:23:08.166561 1145804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:23:08.280351 1145804 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-202332
	
	I0731 21:23:08.280385 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:23:08.280695 1145804 buildroot.go:166] provisioning hostname "kubernetes-upgrade-202332"
	I0731 21:23:08.280718 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:23:08.280951 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.283701 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.284198 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.284230 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.284378 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:08.284626 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.284807 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.284944 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:08.285103 1145804 main.go:141] libmachine: Using SSH client type: native
	I0731 21:23:08.285289 1145804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:23:08.285303 1145804 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-202332 && echo "kubernetes-upgrade-202332" | sudo tee /etc/hostname
	I0731 21:23:08.418903 1145804 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-202332
	
	I0731 21:23:08.418951 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.422007 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.422405 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.422436 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.422683 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:08.422904 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.423086 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.423257 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:08.423417 1145804 main.go:141] libmachine: Using SSH client type: native
	I0731 21:23:08.423682 1145804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:23:08.423711 1145804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-202332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-202332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-202332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:23:08.545096 1145804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
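The SSH snippet that just ran is the hostname-pinning step of provisioning: if no /etc/hosts line already maps the machine name, it either rewrites an existing 127.0.1.1 entry or appends one, so the name resolves locally without DNS. A minimal Go sketch of the same idempotent edit on an in-memory hosts file (a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line already ends with
// the hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(before, "kubernetes-upgrade-202332"))
}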
	I0731 21:23:08.545133 1145804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:23:08.545182 1145804 buildroot.go:174] setting up certificates
	I0731 21:23:08.545194 1145804 provision.go:84] configureAuth start
	I0731 21:23:08.545207 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetMachineName
	I0731 21:23:08.545525 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:23:08.548219 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.548505 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.548535 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.548667 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.551045 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.551405 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.551442 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.551576 1145804 provision.go:143] copyHostCerts
	I0731 21:23:08.551643 1145804 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:23:08.551657 1145804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:23:08.551734 1145804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:23:08.551877 1145804 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:23:08.551890 1145804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:23:08.551927 1145804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:23:08.552023 1145804 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:23:08.552033 1145804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:23:08.552065 1145804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:23:08.552175 1145804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-202332 san=[127.0.0.1 192.168.39.10 kubernetes-upgrade-202332 localhost minikube]
	I0731 21:23:08.803117 1145804 provision.go:177] copyRemoteCerts
	I0731 21:23:08.803191 1145804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:23:08.803222 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.805620 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.805926 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.805960 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.806109 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:08.806323 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.806468 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:08.806596 1145804 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:23:08.894629 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:23:08.921681 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 21:23:08.953155 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:23:08.979296 1145804 provision.go:87] duration metric: took 434.085196ms to configureAuth
	I0731 21:23:08.979329 1145804 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:23:08.979503 1145804 config.go:182] Loaded profile config "kubernetes-upgrade-202332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:23:08.979595 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:08.982035 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.982422 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:08.982457 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:08.982601 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:08.982841 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.983071 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:08.983217 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:08.983403 1145804 main.go:141] libmachine: Using SSH client type: native
	I0731 21:23:08.983623 1145804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:23:08.983646 1145804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:23:09.912135 1145804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:23:09.912171 1145804 machine.go:97] duration metric: took 1.749944258s to provisionDockerMachine
	I0731 21:23:09.912185 1145804 start.go:293] postStartSetup for "kubernetes-upgrade-202332" (driver="kvm2")
	I0731 21:23:09.912198 1145804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:23:09.912221 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:09.912571 1145804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:23:09.912607 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:09.915494 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:09.915897 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:09.915927 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:09.916156 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:09.916343 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:09.916506 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:09.916675 1145804 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:23:10.006568 1145804 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:23:10.010557 1145804 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:23:10.010590 1145804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:23:10.010678 1145804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:23:10.010777 1145804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:23:10.010893 1145804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:23:10.021021 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:23:10.045082 1145804 start.go:296] duration metric: took 132.880713ms for postStartSetup
	I0731 21:23:10.045124 1145804 fix.go:56] duration metric: took 1.905276043s for fixHost
	I0731 21:23:10.045147 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:10.047964 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.048364 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:10.048398 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.048578 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:10.048847 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:10.049026 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:10.049161 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:10.049332 1145804 main.go:141] libmachine: Using SSH client type: native
	I0731 21:23:10.049559 1145804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0731 21:23:10.049574 1145804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:23:10.164939 1145804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460990.144135956
	
	I0731 21:23:10.164967 1145804 fix.go:216] guest clock: 1722460990.144135956
	I0731 21:23:10.164975 1145804 fix.go:229] Guest: 2024-07-31 21:23:10.144135956 +0000 UTC Remote: 2024-07-31 21:23:10.045127755 +0000 UTC m=+2.053411646 (delta=99.008201ms)
	I0731 21:23:10.164999 1145804 fix.go:200] guest clock delta is within tolerance: 99.008201ms
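The fix.go lines above read the guest clock over SSH with date, compare it to the host clock, and only resynchronize when the drift leaves a tolerance window; here the 99ms delta is accepted. A toy Go version of that comparison (the 2-second tolerance below is an assumed value for illustration, not one taken from this log):

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host clock and
// whether that drift stays inside the allowed tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(99 * time.Millisecond) // drift similar to the delta in the log above
	delta, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}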
	I0731 21:23:10.165007 1145804 start.go:83] releasing machines lock for "kubernetes-upgrade-202332", held for 2.025169471s
	I0731 21:23:10.165033 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:10.165314 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:23:10.168420 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.168840 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:10.168877 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.169058 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:10.169613 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:10.169788 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .DriverName
	I0731 21:23:10.169882 1145804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:23:10.169930 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:10.169977 1145804 ssh_runner.go:195] Run: cat /version.json
	I0731 21:23:10.170002 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHHostname
	I0731 21:23:10.172494 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.172821 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:10.172847 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.172918 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.173030 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:10.173243 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:10.173424 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:10.173438 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:10.173453 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:10.173618 1145804 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:23:10.173634 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHPort
	I0731 21:23:10.173786 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHKeyPath
	I0731 21:23:10.173895 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetSSHUsername
	I0731 21:23:10.174037 1145804 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kubernetes-upgrade-202332/id_rsa Username:docker}
	I0731 21:23:10.276980 1145804 ssh_runner.go:195] Run: systemctl --version
	I0731 21:23:10.282767 1145804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:23:10.471606 1145804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:23:10.533998 1145804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:23:10.534087 1145804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:23:10.564802 1145804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:23:10.564845 1145804 start.go:495] detecting cgroup driver to use...
	I0731 21:23:10.564955 1145804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:23:10.599527 1145804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:23:10.671558 1145804 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:23:10.671634 1145804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:23:10.700075 1145804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:23:10.724808 1145804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:23:10.949228 1145804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:23:11.113763 1145804 docker.go:233] disabling docker service ...
	I0731 21:23:11.113831 1145804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:23:11.131877 1145804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:23:11.147898 1145804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:23:11.306534 1145804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:23:11.491838 1145804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:23:11.511619 1145804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:23:11.533625 1145804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:23:11.533688 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.547853 1145804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:23:11.547924 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.561137 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.574030 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.590870 1145804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:23:11.604632 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.616485 1145804 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.630618 1145804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:23:11.643439 1145804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:23:11.656016 1145804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:23:11.667065 1145804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:23:11.837974 1145804 ssh_runner.go:195] Run: sudo systemctl restart crio
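The sequence from 21:23:11.533 to 21:23:11.837 rewrites /etc/crio/crio.conf.d/02-crio.conf with sed: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, then reload systemd and restart crio. A rough Go sketch of the same string-level edits applied to an in-memory copy of the drop-in file (illustrative only, with no SSH wiring):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the edits the sed commands above perform on
// /etc/crio/crio.conf.d/02-crio.conf, but on an in-memory string.
func rewriteCrioConf(conf string) string {
	// Pin the pause image and the cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Ensure a default_sysctls block exists and opens low ports to unprivileged users.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(conf, "$1\ndefault_sysctls = [\n]")
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}

Run against the sample input in main, this prints a drop-in with exactly the pause image, cgroup settings, and sysctl that the log shows being configured.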
	I0731 21:23:12.209934 1145804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:23:12.210018 1145804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:23:12.214657 1145804 start.go:563] Will wait 60s for crictl version
	I0731 21:23:12.214719 1145804 ssh_runner.go:195] Run: which crictl
	I0731 21:23:12.218373 1145804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:23:12.265652 1145804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:23:12.265756 1145804 ssh_runner.go:195] Run: crio --version
	I0731 21:23:12.296107 1145804 ssh_runner.go:195] Run: crio --version
	I0731 21:23:12.330613 1145804 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:23:12.219553 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:23:12.219779 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:23:12.219792 1142911 kubeadm.go:310] 
	I0731 21:23:12.219863 1142911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:23:12.219937 1142911 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:23:12.219946 1142911 kubeadm.go:310] 
	I0731 21:23:12.219987 1142911 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:23:12.220048 1142911 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:23:12.220178 1142911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:23:12.220195 1142911 kubeadm.go:310] 
	I0731 21:23:12.220338 1142911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:23:12.220388 1142911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:23:12.220435 1142911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:23:12.220445 1142911 kubeadm.go:310] 
	I0731 21:23:12.220607 1142911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:23:12.220737 1142911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:23:12.220752 1142911 kubeadm.go:310] 
	I0731 21:23:12.220891 1142911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:23:12.220991 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:23:12.221112 1142911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:23:12.221220 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:23:12.221231 1142911 kubeadm.go:310] 
	I0731 21:23:12.221766 1142911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:23:12.221873 1142911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:23:12.221959 1142911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:23:12.222045 1142911 kubeadm.go:394] duration metric: took 3m55.474854938s to StartCluster
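The 1142911 lines above appear to come from a different minikube process in the same test run (note the different pid and the v1.20.0 binaries); its kubeadm wait-control-plane phase keeps probing the kubelet healthz endpoint on 127.0.0.1:10248 and gives up after the timeout, producing the connection-refused messages repeated below. A trivial Go version of that probe (the endpoint is taken from the log; the retry cadence is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubelet polls the kubelet healthz endpoint the way the kubelet-check
// messages above describe, returning nil as soon as it answers 200.
func probeKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // retry cadence is an assumption, not from the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := probeKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err) // matches the connection-refused failures above when the kubelet is down
	}
}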
	I0731 21:23:12.222095 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:23:12.222170 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:23:12.268456 1142911 cri.go:89] found id: ""
	I0731 21:23:12.268488 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.268499 1142911 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:23:12.268507 1142911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:23:12.268575 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:23:12.311331 1142911 cri.go:89] found id: ""
	I0731 21:23:12.311361 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.311370 1142911 logs.go:278] No container was found matching "etcd"
	I0731 21:23:12.311377 1142911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:23:12.311443 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:23:12.348062 1142911 cri.go:89] found id: ""
	I0731 21:23:12.348123 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.348135 1142911 logs.go:278] No container was found matching "coredns"
	I0731 21:23:12.348144 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:23:12.348219 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:23:12.382136 1142911 cri.go:89] found id: ""
	I0731 21:23:12.382171 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.382183 1142911 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:23:12.382192 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:23:12.382278 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:23:12.420889 1142911 cri.go:89] found id: ""
	I0731 21:23:12.420917 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.420929 1142911 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:23:12.420937 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:23:12.421000 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:23:12.462621 1142911 cri.go:89] found id: ""
	I0731 21:23:12.462652 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.462662 1142911 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:23:12.462669 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:23:12.462736 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:23:12.500005 1142911 cri.go:89] found id: ""
	I0731 21:23:12.500040 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.500052 1142911 logs.go:278] No container was found matching "kindnet"
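After kubeadm gives up, the cri.go and logs.go lines above sweep a fixed list of control-plane component names and ask crictl for matching containers; every query comes back empty because nothing ever started. A small Go sketch of that sweep (the component list is taken from the log; the command wiring is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers asks crictl for container IDs whose name matches the given
// component, mirroring the "listing CRI containers" queries above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids)) // 0 for every component in the failed run above
	}
}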
	I0731 21:23:12.500065 1142911 logs.go:123] Gathering logs for kubelet ...
	I0731 21:23:12.500080 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:23:12.557573 1142911 logs.go:123] Gathering logs for dmesg ...
	I0731 21:23:12.557615 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:23:12.571612 1142911 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:23:12.571649 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:23:12.721857 1142911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:23:12.721888 1142911 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:23:12.721906 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:23:12.823082 1142911 logs.go:123] Gathering logs for container status ...
	I0731 21:23:12.823128 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 21:23:12.864083 1142911 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:23:12.864154 1142911 out.go:239] * 
	W0731 21:23:12.864226 1142911 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:23:12.864257 1142911 out.go:239] * 
	W0731 21:23:12.865104 1142911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:23:12.868234 1142911 out.go:177] 
	W0731 21:23:12.869488 1142911 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:23:12.869539 1142911 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:23:12.869560 1142911 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:23:12.871330 1142911 out.go:177] 
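The wait-control-plane failure above comes down to the kubelet and the CRI-O runtime on that node. An illustrative way to collect the evidence the advice box asks for from the host, assuming a hypothetical profile name <profile> for the failing cluster (the real profile name is not shown in this excerpt), is:

    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    minikube logs -p <profile> --file=logs.txt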
	I0731 21:23:12.332034 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) Calling .GetIP
	I0731 21:23:12.334774 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:12.335289 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:c8:0d", ip: ""} in network mk-kubernetes-upgrade-202332: {Iface:virbr2 ExpiryTime:2024-07-31 22:18:11 +0000 UTC Type:0 Mac:52:54:00:a5:c8:0d Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:kubernetes-upgrade-202332 Clientid:01:52:54:00:a5:c8:0d}
	I0731 21:23:12.335336 1145804 main.go:141] libmachine: (kubernetes-upgrade-202332) DBG | domain kubernetes-upgrade-202332 has defined IP address 192.168.39.10 and MAC address 52:54:00:a5:c8:0d in network mk-kubernetes-upgrade-202332
	I0731 21:23:12.335597 1145804 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:23:12.339819 1145804 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:23:12.339937 1145804 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:23:12.339983 1145804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:23:12.382909 1145804 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:23:12.382931 1145804 crio.go:433] Images already preloaded, skipping extraction
	I0731 21:23:12.382980 1145804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:23:12.599397 1145804 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:23:12.599429 1145804 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:23:12.599442 1145804 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:23:12.599606 1145804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-202332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
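The ExecStart override above is installed as a systemd drop-in; the lines that follow show it being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the daemon being reloaded and started. An illustrative check of the effective unit on the node, not something this run performs, would be:

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager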
	I0731 21:23:12.599702 1145804 ssh_runner.go:195] Run: crio config
	I0731 21:23:12.708955 1145804 cni.go:84] Creating CNI manager for ""
	I0731 21:23:12.708984 1145804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:23:12.708995 1145804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:23:12.709026 1145804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-202332 NodeName:kubernetes-upgrade-202332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs
/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:23:12.709264 1145804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-202332"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
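The kubeadm config rendered above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustrative sketch only (assuming the file is promoted to /var/tmp/minikube/kubeadm.yaml, the path used in the kubeadm invocation quoted in the earlier failure), it could be exercised by hand with a dry run against the bundled binaries:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run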
	
	I0731 21:23:12.709356 1145804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:23:12.724281 1145804 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:23:12.724351 1145804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:23:12.734314 1145804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0731 21:23:12.754329 1145804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:23:12.772095 1145804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0731 21:23:12.788263 1145804 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0731 21:23:12.792212 1145804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:23:12.924547 1145804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:23:12.945545 1145804 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332 for IP: 192.168.39.10
	I0731 21:23:12.945576 1145804 certs.go:194] generating shared ca certs ...
	I0731 21:23:12.945597 1145804 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:23:12.945774 1145804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:23:12.945837 1145804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:23:12.945852 1145804 certs.go:256] generating profile certs ...
	I0731 21:23:12.945965 1145804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/client.key
	I0731 21:23:12.946011 1145804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key.60a08cd8
	I0731 21:23:12.946045 1145804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key
	I0731 21:23:12.946151 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:23:12.946179 1145804 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:23:12.946188 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:23:12.946211 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:23:12.946235 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:23:12.946257 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:23:12.946294 1145804 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:23:12.946899 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:23:12.973999 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:23:12.999542 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:23:13.032534 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:23:13.060539 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 21:23:13.086019 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:23:13.110900 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:23:13.134915 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kubernetes-upgrade-202332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:23:13.161696 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:23:13.187828 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:23:13.218355 1145804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:23:13.249138 1145804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:23:13.270805 1145804 ssh_runner.go:195] Run: openssl version
	I0731 21:23:13.277719 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:23:13.289817 1145804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:23:13.294264 1145804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:23:13.294352 1145804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:23:13.300536 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:23:13.310990 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:23:13.322651 1145804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:23:13.328019 1145804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:23:13.328070 1145804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:23:13.333937 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:23:13.344774 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:23:13.357110 1145804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:23:13.363357 1145804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:23:13.363429 1145804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:23:13.371446 1145804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:23:13.383992 1145804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:23:13.388752 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:23:13.394542 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:23:13.402575 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:23:13.408220 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:23:13.414840 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:23:13.422770 1145804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:23:13.430463 1145804 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:23:13.430577 1145804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:23:13.430634 1145804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:23:13.496030 1145804 cri.go:89] found id: "6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab"
	I0731 21:23:13.496052 1145804 cri.go:89] found id: "02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39"
	I0731 21:23:13.496057 1145804 cri.go:89] found id: "471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7"
	I0731 21:23:13.496061 1145804 cri.go:89] found id: "2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333"
	I0731 21:23:13.496064 1145804 cri.go:89] found id: ""
	I0731 21:23:13.496145 1145804 ssh_runner.go:195] Run: sudo runc list -f json
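The container IDs returned above can be inspected directly with crictl, as the kubeadm advice earlier in this log suggests; for example, for the attempt-1 etcd container (shown as CONTAINER_EXITED in the CRI-O listing below) — illustrative, not part of the captured run:

    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs 6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock inspect 6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab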
	
	
	==> CRI-O <==
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.394623529Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:192a5bccd7890ec09d4774d64c033f718b588015a853e2921cc8a09509e51a9f,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-202332,Uid:c8af05815f25b4600afbba5444ce0efe,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722460992504531211,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.10:2379,kubernetes.io/config.hash: c8af05815f25b4600afbba5444ce0efe,kubernetes.io/config.seen: 2024-07-31T21:22:59.190804989Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b729f8a49451add7126f1bb465bc147e3126eaec97
0293d971c1006797fc2dd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-202332,Uid:24a6153fc83fa0c646ce8900b186f6b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722460992433597977,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 24a6153fc83fa0c646ce8900b186f6b0,kubernetes.io/config.seen: 2024-07-31T21:22:59.146298243Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a8ee6ee8c7cf38fcf84c0e07e5cd1772ece34e6ec5ed83e33cba031d36cd11a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-202332,Uid:14fd33a614a61771e0d56f1e9ad95c4e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722460992416545820,Labels:map[string]string{component: kube-apiserver,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.10:8443,kubernetes.io/config.hash: 14fd33a614a61771e0d56f1e9ad95c4e,kubernetes.io/config.seen: 2024-07-31T21:22:59.146292772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1baf3bff29f3aa10a6733de22d918d4bd9786dabfc0f194ec866628268c010d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-202332,Uid:c07e7c05b68bca0b36b570712f63a919,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722460992414171470,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68
bca0b36b570712f63a919,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c07e7c05b68bca0b36b570712f63a919,kubernetes.io/config.seen: 2024-07-31T21:22:59.146297118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f7ee769562ef17a766e2a80d2ceb0dffc30a7e84de7cdfce261fe93fa3a5a92,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-202332,Uid:c8af05815f25b4600afbba5444ce0efe,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722460990359881512,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.10:2379,kubernetes.io/config.hash: c8af05815f25b4600afbba5444ce0efe,kubernetes.io/config.seen: 2024-07-31T21:22:59.190804989Z,kubernetes.io/config.source: file,}
,RuntimeHandler:,},&PodSandbox{Id:c0cc20699bdad70c95d5711825b8911cfbdbed37f59a6e352683695a67325c5a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-202332,Uid:14fd33a614a61771e0d56f1e9ad95c4e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722460990357573486,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.10:8443,kubernetes.io/config.hash: 14fd33a614a61771e0d56f1e9ad95c4e,kubernetes.io/config.seen: 2024-07-31T21:22:59.146292772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6bac35fd90f399e37ab6cc70c983b5d512bceadd63d19c028615cbf453cd090,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-202332,Uid:c07
e7c05b68bca0b36b570712f63a919,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722460990356641564,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c07e7c05b68bca0b36b570712f63a919,kubernetes.io/config.seen: 2024-07-31T21:22:59.146297118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:064118c5b33e5939249210b9041ccc269db6c1a2a052f4749d335bc528be360e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-202332,Uid:24a6153fc83fa0c646ce8900b186f6b0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722460990343101002,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgr
ade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 24a6153fc83fa0c646ce8900b186f6b0,kubernetes.io/config.seen: 2024-07-31T21:22:59.146298243Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=738de545-ceb4-4985-a712-e61d87d60a89 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.395393563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecc6380b-8e66-4389-9fd7-b326509933be name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.395460389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecc6380b-8e66-4389-9fd7-b326509933be name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.395631699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e062819db36ab97edd58de150d140a8f622684cbde8e1837d03d949c5ee3a4,PodSandboxId:0b729f8a49451add7126f1bb465bc147e3126eaec970293d971c1006797fc2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722460995762119467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae014df9a2e493b38960b6c6a4f4b82852f8aba3d745799e9e33c93650fda19e,PodSandboxId:192a5bccd7890ec09d4774d64c033f718b588015a853e2921cc8a09509e51a9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722460995748175993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e62315b2b3d38a8fe08a86ebc270b6c155986eb5d75723071f2994f62a93df4,PodSandboxId:5a8ee6ee8c7cf38fcf84c0e07e5cd1772ece34e6ec5ed83e33cba031d36cd11a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722460995739580000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9be7b349f6e1ad1c17a29397ce40adb21e1ab7b694e5ecf66c3b221aab4bd85,PodSandboxId:a1baf3bff29f3aa10a6733de22d918d4bd9786dabfc0f194ec866628268c010d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722460995725635135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab,PodSandboxId:1f7ee769562ef17a766e2a80d2ceb0dffc30a7e84de7cdfce261fe93fa3a5a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722460990615626216,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7,PodSandboxId:c6bac35fd90f399e37ab6cc70c983b5d512bceadd63d19c028615cbf453cd090,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722460990572487019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39,PodSandboxId:c0cc20699bdad70c95d5711825b8911cfbdbed37f59a6e352683695a67325c5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722460990595550060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333,PodSandboxId:064118c5b33e5939249210b9041ccc269db6c1a2a052f4749d335bc528be360e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722460990471561420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecc6380b-8e66-4389-9fd7-b326509933be name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.414901476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00fa67a5-534b-4b6f-97ea-2dd7edc16645 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.414977423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00fa67a5-534b-4b6f-97ea-2dd7edc16645 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.416049846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36e66c6e-8998-4f20-89ed-f0a57a2ad499 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.416540993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461002416512433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36e66c6e-8998-4f20-89ed-f0a57a2ad499 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.417153401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64311608-4477-458b-ad5a-e858aefd0acb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.417218150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64311608-4477-458b-ad5a-e858aefd0acb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.417440642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e062819db36ab97edd58de150d140a8f622684cbde8e1837d03d949c5ee3a4,PodSandboxId:0b729f8a49451add7126f1bb465bc147e3126eaec970293d971c1006797fc2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722460995762119467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae014df9a2e493b38960b6c6a4f4b82852f8aba3d745799e9e33c93650fda19e,PodSandboxId:192a5bccd7890ec09d4774d64c033f718b588015a853e2921cc8a09509e51a9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722460995748175993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e62315b2b3d38a8fe08a86ebc270b6c155986eb5d75723071f2994f62a93df4,PodSandboxId:5a8ee6ee8c7cf38fcf84c0e07e5cd1772ece34e6ec5ed83e33cba031d36cd11a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722460995739580000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9be7b349f6e1ad1c17a29397ce40adb21e1ab7b694e5ecf66c3b221aab4bd85,PodSandboxId:a1baf3bff29f3aa10a6733de22d918d4bd9786dabfc0f194ec866628268c010d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722460995725635135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab,PodSandboxId:1f7ee769562ef17a766e2a80d2ceb0dffc30a7e84de7cdfce261fe93fa3a5a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722460990615626216,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7,PodSandboxId:c6bac35fd90f399e37ab6cc70c983b5d512bceadd63d19c028615cbf453cd090,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722460990572487019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39,PodSandboxId:c0cc20699bdad70c95d5711825b8911cfbdbed37f59a6e352683695a67325c5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722460990595550060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333,PodSandboxId:064118c5b33e5939249210b9041ccc269db6c1a2a052f4749d335bc528be360e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722460990471561420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64311608-4477-458b-ad5a-e858aefd0acb name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.466152030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42e49d3e-e3ea-4720-b555-4f3cd9ece342 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.466227400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42e49d3e-e3ea-4720-b555-4f3cd9ece342 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.467220274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f99a8e74-99e6-41e1-aa26-ead2cf937213 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.467753453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461002467729799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f99a8e74-99e6-41e1-aa26-ead2cf937213 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.468241668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c00bb3d6-1c41-48e2-8ce1-9bd1780f6239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.468496485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c00bb3d6-1c41-48e2-8ce1-9bd1780f6239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.468887673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e062819db36ab97edd58de150d140a8f622684cbde8e1837d03d949c5ee3a4,PodSandboxId:0b729f8a49451add7126f1bb465bc147e3126eaec970293d971c1006797fc2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722460995762119467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae014df9a2e493b38960b6c6a4f4b82852f8aba3d745799e9e33c93650fda19e,PodSandboxId:192a5bccd7890ec09d4774d64c033f718b588015a853e2921cc8a09509e51a9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722460995748175993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e62315b2b3d38a8fe08a86ebc270b6c155986eb5d75723071f2994f62a93df4,PodSandboxId:5a8ee6ee8c7cf38fcf84c0e07e5cd1772ece34e6ec5ed83e33cba031d36cd11a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722460995739580000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9be7b349f6e1ad1c17a29397ce40adb21e1ab7b694e5ecf66c3b221aab4bd85,PodSandboxId:a1baf3bff29f3aa10a6733de22d918d4bd9786dabfc0f194ec866628268c010d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722460995725635135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab,PodSandboxId:1f7ee769562ef17a766e2a80d2ceb0dffc30a7e84de7cdfce261fe93fa3a5a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722460990615626216,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7,PodSandboxId:c6bac35fd90f399e37ab6cc70c983b5d512bceadd63d19c028615cbf453cd090,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722460990572487019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39,PodSandboxId:c0cc20699bdad70c95d5711825b8911cfbdbed37f59a6e352683695a67325c5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722460990595550060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333,PodSandboxId:064118c5b33e5939249210b9041ccc269db6c1a2a052f4749d335bc528be360e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722460990471561420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c00bb3d6-1c41-48e2-8ce1-9bd1780f6239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.533555565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32d2ea48-4f4c-4770-95f4-fe432d429909 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.533674792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32d2ea48-4f4c-4770-95f4-fe432d429909 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.535097631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8924b352-c09f-46c1-ac21-e12db8a73688 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.536427703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461002536393148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8924b352-c09f-46c1-ac21-e12db8a73688 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.538795607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c39b58f7-8358-40bc-8fbf-f6ca358c02f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.538855657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c39b58f7-8358-40bc-8fbf-f6ca358c02f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:23:22 kubernetes-upgrade-202332 crio[1890]: time="2024-07-31 21:23:22.539065978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e062819db36ab97edd58de150d140a8f622684cbde8e1837d03d949c5ee3a4,PodSandboxId:0b729f8a49451add7126f1bb465bc147e3126eaec970293d971c1006797fc2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722460995762119467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae014df9a2e493b38960b6c6a4f4b82852f8aba3d745799e9e33c93650fda19e,PodSandboxId:192a5bccd7890ec09d4774d64c033f718b588015a853e2921cc8a09509e51a9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722460995748175993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e62315b2b3d38a8fe08a86ebc270b6c155986eb5d75723071f2994f62a93df4,PodSandboxId:5a8ee6ee8c7cf38fcf84c0e07e5cd1772ece34e6ec5ed83e33cba031d36cd11a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722460995739580000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9be7b349f6e1ad1c17a29397ce40adb21e1ab7b694e5ecf66c3b221aab4bd85,PodSandboxId:a1baf3bff29f3aa10a6733de22d918d4bd9786dabfc0f194ec866628268c010d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722460995725635135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab,PodSandboxId:1f7ee769562ef17a766e2a80d2ceb0dffc30a7e84de7cdfce261fe93fa3a5a92,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722460990615626216,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8af05815f25b4600afbba5444ce0efe,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7,PodSandboxId:c6bac35fd90f399e37ab6cc70c983b5d512bceadd63d19c028615cbf453cd090,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722460990572487019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07e7c05b68bca0b36b570712f63a919,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39,PodSandboxId:c0cc20699bdad70c95d5711825b8911cfbdbed37f59a6e352683695a67325c5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722460990595550060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fd33a614a61771e0d56f1e9ad95c4e,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333,PodSandboxId:064118c5b33e5939249210b9041ccc269db6c1a2a052f4749d335bc528be360e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722460990471561420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-202332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24a6153fc83fa0c646ce8900b186f6b0,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c39b58f7-8358-40bc-8fbf-f6ca358c02f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1e062819db36       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   6 seconds ago       Running             kube-scheduler            2                   0b729f8a49451       kube-scheduler-kubernetes-upgrade-202332
	ae014df9a2e49       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   6 seconds ago       Running             etcd                      2                   192a5bccd7890       etcd-kubernetes-upgrade-202332
	8e62315b2b3d3       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   6 seconds ago       Running             kube-apiserver            2                   5a8ee6ee8c7cf       kube-apiserver-kubernetes-upgrade-202332
	d9be7b349f6e1       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   6 seconds ago       Running             kube-controller-manager   2                   a1baf3bff29f3       kube-controller-manager-kubernetes-upgrade-202332
	6005b157067dc       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   11 seconds ago      Exited              etcd                      1                   1f7ee769562ef       etcd-kubernetes-upgrade-202332
	02bc695aaa116       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   12 seconds ago      Exited              kube-apiserver            1                   c0cc20699bdad       kube-apiserver-kubernetes-upgrade-202332
	471b4f3eff73d       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   12 seconds ago      Exited              kube-controller-manager   1                   c6bac35fd90f3       kube-controller-manager-kubernetes-upgrade-202332
	2dff9356fb72b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   12 seconds ago      Exited              kube-scheduler            1                   064118c5b33e5       kube-scheduler-kubernetes-upgrade-202332
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-202332
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-202332
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:23:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-202332
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:23:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:23:18 +0000   Wed, 31 Jul 2024 21:23:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:23:18 +0000   Wed, 31 Jul 2024 21:23:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:23:18 +0000   Wed, 31 Jul 2024 21:23:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:23:18 +0000   Wed, 31 Jul 2024 21:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    kubernetes-upgrade-202332
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c1c1adfbbbc4e7ab520c24b2dd8b049
	  System UUID:                5c1c1adf-bbbc-4e7a-b520-c24b2dd8b049
	  Boot ID:                    9cb64b36-8f0b-41b3-9347-8d97a266a2c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-202332                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16s
	  kube-system                 kube-apiserver-kubernetes-upgrade-202332             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-202332    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-202332             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 23s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet  Node kubernetes-upgrade-202332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet  Node kubernetes-upgrade-202332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet  Node kubernetes-upgrade-202332 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.931630] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527437] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.491797] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.057700] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054711] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.191115] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.106727] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.257299] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +3.937861] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +1.712858] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
	[  +0.056162] kauditd_printk_skb: 158 callbacks suppressed
	[Jul31 21:23] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.078323] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.727501] systemd-fstab-generator[1751]: Ignoring "noauto" option for root device
	[  +0.208570] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +0.193934] systemd-fstab-generator[1836]: Ignoring "noauto" option for root device
	[  +0.182401] systemd-fstab-generator[1849]: Ignoring "noauto" option for root device
	[  +0.341466] systemd-fstab-generator[1877]: Ignoring "noauto" option for root device
	[  +0.694410] kauditd_printk_skb: 198 callbacks suppressed
	[  +0.417288] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[  +2.206112] systemd-fstab-generator[2334]: Ignoring "noauto" option for root device
	[  +5.678030] systemd-fstab-generator[2611]: Ignoring "noauto" option for root device
	[  +0.087360] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> etcd [6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab] <==
	{"level":"info","ts":"2024-07-31T21:23:11.301388Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T21:23:11.377651Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","commit-index":303}
	{"level":"info","ts":"2024-07-31T21:23:11.377743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T21:23:11.377774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became follower at term 2"}
	{"level":"info","ts":"2024-07-31T21:23:11.377796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f8926bd555ec3d0e [peers: [], term: 2, commit: 303, applied: 0, lastindex: 303, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T21:23:11.389027Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-31T21:23:11.45352Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":296}
	{"level":"info","ts":"2024-07-31T21:23:11.464462Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-31T21:23:11.472143Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f8926bd555ec3d0e","timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:23:11.472557Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2024-07-31T21:23:11.472617Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"f8926bd555ec3d0e","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T21:23:11.478816Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:23:11.48054Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T21:23:11.480657Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:23:11.489532Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:23:11.489594Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:23:11.480915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-07-31T21:23:11.489795Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-07-31T21:23:11.489946Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:23:11.489982Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:23:11.504501Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:23:11.504703Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:23:11.504724Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:23:11.504776Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-31T21:23:11.504782Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.10:2380"}
	
	
	==> etcd [ae014df9a2e493b38960b6c6a4f4b82852f8aba3d745799e9e33c93650fda19e] <==
	{"level":"info","ts":"2024-07-31T21:23:16.154161Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-07-31T21:23:16.15426Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:23:16.1543Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:23:16.163977Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:23:16.169738Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:23:16.174726Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:23:16.174879Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:23:16.17345Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-31T21:23:16.174995Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-07-31T21:23:17.509412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:23:17.509462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:23:17.509496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-07-31T21:23:17.509514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:23:17.50952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-31T21:23:17.509529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:23:17.509536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-07-31T21:23:17.51418Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:kubernetes-upgrade-202332 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:23:17.514445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:23:17.514491Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:23:17.515516Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:23:17.516168Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-07-31T21:23:17.517147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:23:17.517402Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:23:17.517421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:23:17.517884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:23:22 up 0 min,  0 users,  load average: 0.45, 0.12, 0.04
	Linux kubernetes-upgrade-202332 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39] <==
	I0731 21:23:10.999372       1 options.go:228] external host was not specified, using 192.168.39.10
	I0731 21:23:11.055549       1 server.go:142] Version: v1.31.0-beta.0
	I0731 21:23:11.056116       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [8e62315b2b3d38a8fe08a86ebc270b6c155986eb5d75723071f2994f62a93df4] <==
	I0731 21:23:18.701411       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0731 21:23:18.745420       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 21:23:18.745585       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 21:23:18.801547       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:23:18.801825       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:23:18.802692       1 aggregator.go:171] initial CRD sync complete...
	I0731 21:23:18.802719       1 autoregister_controller.go:144] Starting autoregister controller
	I0731 21:23:18.802725       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:23:18.803068       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:23:18.877037       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:23:18.884475       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:23:18.884582       1 policy_source.go:224] refreshing policies
	I0731 21:23:18.895224       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0731 21:23:18.895386       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0731 21:23:18.898649       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:23:18.898800       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:23:18.899095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:23:18.905241       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0731 21:23:18.908088       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:23:19.699427       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 21:23:20.451539       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:23:20.477939       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:23:20.517840       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:23:20.608354       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:23:20.616809       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7] <==
	I0731 21:23:11.794480       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [d9be7b349f6e1ad1c17a29397ce40adb21e1ab7b694e5ecf66c3b221aab4bd85] <==
	I0731 21:23:22.128826       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0731 21:23:22.128880       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0731 21:23:22.128889       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0731 21:23:22.137175       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0731 21:23:22.163435       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0731 21:23:22.248894       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"kubernetes-upgrade-202332\" does not exist"
	I0731 21:23:22.307882       1 shared_informer.go:320] Caches are synced for node
	I0731 21:23:22.307943       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0731 21:23:22.307960       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 21:23:22.307965       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 21:23:22.307969       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 21:23:22.318510       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-202332" podCIDRs=["10.244.0.0/24"]
	I0731 21:23:22.318579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-202332"
	I0731 21:23:22.318712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-202332"
	I0731 21:23:22.331878       1 shared_informer.go:320] Caches are synced for TTL
	I0731 21:23:22.351619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-202332"
	I0731 21:23:22.375390       1 shared_informer.go:320] Caches are synced for crt configmap
	I0731 21:23:22.391345       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 21:23:22.481748       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 21:23:22.529005       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 21:23:22.641868       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 21:23:22.688188       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 21:23:22.688233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 21:23:22.689392       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 21:23:22.696465       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	
	
	==> kube-scheduler [2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333] <==
	
	
	==> kube-scheduler [c1e062819db36ab97edd58de150d140a8f622684cbde8e1837d03d949c5ee3a4] <==
	I0731 21:23:16.588268       1 serving.go:386] Generated self-signed cert in-memory
	W0731 21:23:18.727645       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:23:18.727686       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:23:18.727697       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:23:18.727703       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:23:18.804073       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 21:23:18.804123       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:23:18.814039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:23:18.814136       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:23:18.814365       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:23:18.814450       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 21:23:18.915069       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469127    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14fd33a614a61771e0d56f1e9ad95c4e-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-202332\" (UID: \"14fd33a614a61771e0d56f1e9ad95c4e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469171    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14fd33a614a61771e0d56f1e9ad95c4e-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-202332\" (UID: \"14fd33a614a61771e0d56f1e9ad95c4e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469218    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c07e7c05b68bca0b36b570712f63a919-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-202332\" (UID: \"c07e7c05b68bca0b36b570712f63a919\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469260    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c07e7c05b68bca0b36b570712f63a919-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-202332\" (UID: \"c07e7c05b68bca0b36b570712f63a919\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469302    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c07e7c05b68bca0b36b570712f63a919-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-202332\" (UID: \"c07e7c05b68bca0b36b570712f63a919\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469392    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24a6153fc83fa0c646ce8900b186f6b0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-202332\" (UID: \"24a6153fc83fa0c646ce8900b186f6b0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469439    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c8af05815f25b4600afbba5444ce0efe-etcd-data\") pod \"etcd-kubernetes-upgrade-202332\" (UID: \"c8af05815f25b4600afbba5444ce0efe\") " pod="kube-system/etcd-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469496    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c07e7c05b68bca0b36b570712f63a919-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-202332\" (UID: \"c07e7c05b68bca0b36b570712f63a919\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.469608    2341 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c07e7c05b68bca0b36b570712f63a919-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-202332\" (UID: \"c07e7c05b68bca0b36b570712f63a919\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: E0731 21:23:15.476645    2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-202332?timeout=10s\": dial tcp 192.168.39.10:8443: connect: connection refused" interval="400ms"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.571463    2341 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: E0731 21:23:15.572300    2341 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.10:8443: connect: connection refused" node="kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.712453    2341 scope.go:117] "RemoveContainer" containerID="471b4f3eff73db670ab86a529f1314d9d8eaaa560911c06370f4cc386c98eca7"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.715527    2341 scope.go:117] "RemoveContainer" containerID="6005b157067dc7d1451e3da7766547bfd1ff4c4fc1e7e768f6203e927157bcab"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.716033    2341 scope.go:117] "RemoveContainer" containerID="02bc695aaa116fb684b3fa14620e2957abd3e074599d7c3cdc5c06adc77b6a39"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.717621    2341 scope.go:117] "RemoveContainer" containerID="2dff9356fb72bb2a30dbb9451ccd19d940a8788eea1fe9f92649cfa5550b5333"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: E0731 21:23:15.877759    2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-202332?timeout=10s\": dial tcp 192.168.39.10:8443: connect: connection refused" interval="800ms"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:15.974184    2341 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-202332"
	Jul 31 21:23:15 kubernetes-upgrade-202332 kubelet[2341]: E0731 21:23:15.975506    2341 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.10:8443: connect: connection refused" node="kubernetes-upgrade-202332"
	Jul 31 21:23:16 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:16.777389    2341 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-202332"
	Jul 31 21:23:18 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:18.936844    2341 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-202332"
	Jul 31 21:23:18 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:18.937061    2341 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-202332"
	Jul 31 21:23:19 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:19.253154    2341 apiserver.go:52] "Watching apiserver"
	Jul 31 21:23:19 kubernetes-upgrade-202332 kubelet[2341]: I0731 21:23:19.268563    2341 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 31 21:23:19 kubernetes-upgrade-202332 kubelet[2341]: E0731 21:23:19.389579    2341 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-202332\" already exists" pod="kube-system/etcd-kubernetes-upgrade-202332"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:22.005303 1146109 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19360-1093692/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
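Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner gives up as soon as one line exceeds its default 64 KiB token limit, which is what happens when lastStart.txt contains a very long line. The sketch below is illustrative only, not minikube's actual code; the file path and the 10 MiB cap are assumptions chosen for the example. It shows how a reader with an enlarged buffer avoids that error.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; the report above reads .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); a longer line makes
	// Scan() stop and Err() return bufio.ErrTooLong ("token too long").
	// Raising the cap (here to an assumed 10 MiB) lets such lines through.
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
	}
}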
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-202332 -n kubernetes-upgrade-202332
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-202332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5cfdc65f69-pmtmc coredns-5cfdc65f69-wtk6b kube-proxy-spwbx storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-202332 describe pod coredns-5cfdc65f69-pmtmc coredns-5cfdc65f69-wtk6b kube-proxy-spwbx storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-202332 describe pod coredns-5cfdc65f69-pmtmc coredns-5cfdc65f69-wtk6b kube-proxy-spwbx storage-provisioner: exit status 1 (70.812608ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5cfdc65f69-pmtmc" not found
	Error from server (NotFound): pods "coredns-5cfdc65f69-wtk6b" not found
	Error from server (NotFound): pods "kube-proxy-spwbx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-202332 describe pod coredns-5cfdc65f69-pmtmc coredns-5cfdc65f69-wtk6b kube-proxy-spwbx storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-202332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-202332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-202332: (10.983024569s)
--- FAIL: TestKubernetesUpgrade (359.81s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (79.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355751 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-355751 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.640244048s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-355751] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-355751" primary control-plane node in "pause-355751" cluster
	* Updating the running kvm2 "pause-355751" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-355751" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:16:20.793866 1140361 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:16:20.793984 1140361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:16:20.793992 1140361 out.go:304] Setting ErrFile to fd 2...
	I0731 21:16:20.793996 1140361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:16:20.794182 1140361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:16:20.794744 1140361 out.go:298] Setting JSON to false
	I0731 21:16:20.795877 1140361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17932,"bootTime":1722442649,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:16:20.795946 1140361 start.go:139] virtualization: kvm guest
	I0731 21:16:20.913365 1140361 out.go:177] * [pause-355751] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:16:21.048001 1140361 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:16:21.047986 1140361 notify.go:220] Checking for updates...
	I0731 21:16:21.195271 1140361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:16:21.216336 1140361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:16:21.258714 1140361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:16:21.482037 1140361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:16:21.707216 1140361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:16:21.822087 1140361 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:16:21.822763 1140361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:16:21.822844 1140361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:16:21.843543 1140361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0731 21:16:21.844298 1140361 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:16:21.845157 1140361 main.go:141] libmachine: Using API Version  1
	I0731 21:16:21.845185 1140361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:16:21.845644 1140361 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:16:21.845881 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:21.846203 1140361 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:16:21.846741 1140361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:16:21.846793 1140361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:16:21.866204 1140361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0731 21:16:21.866785 1140361 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:16:21.867398 1140361 main.go:141] libmachine: Using API Version  1
	I0731 21:16:21.867426 1140361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:16:21.867806 1140361 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:16:21.867999 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:21.971329 1140361 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:16:21.981315 1140361 start.go:297] selected driver: kvm2
	I0731 21:16:21.981342 1140361 start.go:901] validating driver "kvm2" against &{Name:pause-355751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-355751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:16:21.981539 1140361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:16:21.982087 1140361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:16:21.982189 1140361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:16:21.999468 1140361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:16:22.000535 1140361 cni.go:84] Creating CNI manager for ""
	I0731 21:16:22.000558 1140361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:16:22.000657 1140361 start.go:340] cluster config:
	{Name:pause-355751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-355751 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:16:22.000848 1140361 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:16:22.002773 1140361 out.go:177] * Starting "pause-355751" primary control-plane node in "pause-355751" cluster
	I0731 21:16:22.004001 1140361 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:16:22.004058 1140361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:16:22.004069 1140361 cache.go:56] Caching tarball of preloaded images
	I0731 21:16:22.004198 1140361 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:16:22.004215 1140361 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:16:22.004335 1140361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/config.json ...
	I0731 21:16:22.004553 1140361 start.go:360] acquireMachinesLock for pause-355751: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:16:35.601215 1140361 start.go:364] duration metric: took 13.596601496s to acquireMachinesLock for "pause-355751"
	I0731 21:16:35.601282 1140361 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:16:35.601291 1140361 fix.go:54] fixHost starting: 
	I0731 21:16:35.601726 1140361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:16:35.601781 1140361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:16:35.622594 1140361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0731 21:16:35.623081 1140361 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:16:35.623680 1140361 main.go:141] libmachine: Using API Version  1
	I0731 21:16:35.623709 1140361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:16:35.624126 1140361 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:16:35.624342 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:35.624511 1140361 main.go:141] libmachine: (pause-355751) Calling .GetState
	I0731 21:16:35.626417 1140361 fix.go:112] recreateIfNeeded on pause-355751: state=Running err=<nil>
	W0731 21:16:35.626442 1140361 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:16:35.628614 1140361 out.go:177] * Updating the running kvm2 "pause-355751" VM ...
	I0731 21:16:35.629885 1140361 machine.go:94] provisionDockerMachine start ...
	I0731 21:16:35.629921 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:35.630191 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:35.633403 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.633934 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:35.634025 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.634287 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:35.634472 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.634647 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.634839 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:35.634999 1140361 main.go:141] libmachine: Using SSH client type: native
	I0731 21:16:35.635216 1140361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0731 21:16:35.635227 1140361 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:16:35.757397 1140361 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-355751
	
	I0731 21:16:35.757434 1140361 main.go:141] libmachine: (pause-355751) Calling .GetMachineName
	I0731 21:16:35.757762 1140361 buildroot.go:166] provisioning hostname "pause-355751"
	I0731 21:16:35.757829 1140361 main.go:141] libmachine: (pause-355751) Calling .GetMachineName
	I0731 21:16:35.758028 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:35.761478 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.761974 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:35.762011 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.762184 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:35.762418 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.762622 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.762816 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:35.763054 1140361 main.go:141] libmachine: Using SSH client type: native
	I0731 21:16:35.763352 1140361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0731 21:16:35.763369 1140361 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-355751 && echo "pause-355751" | sudo tee /etc/hostname
	I0731 21:16:35.898192 1140361 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-355751
	
	I0731 21:16:35.898227 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:35.901409 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.901800 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:35.901876 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:35.902164 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:35.902403 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.902646 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:35.902830 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:35.903065 1140361 main.go:141] libmachine: Using SSH client type: native
	I0731 21:16:35.903308 1140361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0731 21:16:35.903334 1140361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-355751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-355751/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-355751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:16:36.025998 1140361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:16:36.026035 1140361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:16:36.026082 1140361 buildroot.go:174] setting up certificates
	I0731 21:16:36.026097 1140361 provision.go:84] configureAuth start
	I0731 21:16:36.026115 1140361 main.go:141] libmachine: (pause-355751) Calling .GetMachineName
	I0731 21:16:36.026491 1140361 main.go:141] libmachine: (pause-355751) Calling .GetIP
	I0731 21:16:36.029537 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.029923 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:36.029952 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.030140 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:36.032457 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.032817 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:36.032852 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.033127 1140361 provision.go:143] copyHostCerts
	I0731 21:16:36.033188 1140361 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:16:36.033198 1140361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:16:36.033268 1140361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:16:36.033431 1140361 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:16:36.033447 1140361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:16:36.033481 1140361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:16:36.033566 1140361 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:16:36.033576 1140361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:16:36.033607 1140361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:16:36.033690 1140361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.pause-355751 san=[127.0.0.1 192.168.39.123 localhost minikube pause-355751]
	I0731 21:16:36.430351 1140361 provision.go:177] copyRemoteCerts
	I0731 21:16:36.430433 1140361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:16:36.430467 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:36.433294 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.433583 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:36.433616 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.433864 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:36.434085 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:36.434284 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:36.434483 1140361 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/pause-355751/id_rsa Username:docker}
	I0731 21:16:36.519147 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:16:36.549101 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0731 21:16:36.580875 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:16:36.610541 1140361 provision.go:87] duration metric: took 584.427048ms to configureAuth
	I0731 21:16:36.610570 1140361 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:16:36.610769 1140361 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:16:36.610850 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:36.613567 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.613985 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:36.614019 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:36.614184 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:36.614423 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:36.614585 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:36.614747 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:36.614907 1140361 main.go:141] libmachine: Using SSH client type: native
	I0731 21:16:36.615076 1140361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0731 21:16:36.615090 1140361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:16:42.143964 1140361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:16:42.144002 1140361 machine.go:97] duration metric: took 6.51409606s to provisionDockerMachine
	I0731 21:16:42.144022 1140361 start.go:293] postStartSetup for "pause-355751" (driver="kvm2")
	I0731 21:16:42.144033 1140361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:16:42.144050 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:42.144414 1140361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:16:42.144453 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:42.147188 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.147654 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:42.147685 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.147928 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:42.148190 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:42.148377 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:42.148528 1140361 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/pause-355751/id_rsa Username:docker}
	I0731 21:16:42.234992 1140361 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:16:42.239357 1140361 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:16:42.239386 1140361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:16:42.239454 1140361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:16:42.239528 1140361 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:16:42.239620 1140361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:16:42.249005 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:16:42.273412 1140361 start.go:296] duration metric: took 129.376032ms for postStartSetup
	I0731 21:16:42.273459 1140361 fix.go:56] duration metric: took 6.672169273s for fixHost
	I0731 21:16:42.273482 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:42.276387 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.276704 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:42.276735 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.276937 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:42.277157 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:42.277340 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:42.277461 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:42.277620 1140361 main.go:141] libmachine: Using SSH client type: native
	I0731 21:16:42.277795 1140361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0731 21:16:42.277806 1140361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:16:42.389160 1140361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460602.379598190
	
	I0731 21:16:42.389188 1140361 fix.go:216] guest clock: 1722460602.379598190
	I0731 21:16:42.389196 1140361 fix.go:229] Guest: 2024-07-31 21:16:42.37959819 +0000 UTC Remote: 2024-07-31 21:16:42.27346279 +0000 UTC m=+21.530870990 (delta=106.1354ms)
	I0731 21:16:42.389218 1140361 fix.go:200] guest clock delta is within tolerance: 106.1354ms
	I0731 21:16:42.389224 1140361 start.go:83] releasing machines lock for "pause-355751", held for 6.78796877s
	I0731 21:16:42.389245 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:42.389567 1140361 main.go:141] libmachine: (pause-355751) Calling .GetIP
	I0731 21:16:42.392354 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.392721 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:42.392754 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.392929 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:42.393526 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:42.393741 1140361 main.go:141] libmachine: (pause-355751) Calling .DriverName
	I0731 21:16:42.393839 1140361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:16:42.393884 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:42.394006 1140361 ssh_runner.go:195] Run: cat /version.json
	I0731 21:16:42.394037 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHHostname
	I0731 21:16:42.396892 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.397041 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.397294 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:42.397321 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.397471 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:42.397493 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:42.397522 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:42.397654 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHPort
	I0731 21:16:42.397733 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:42.397830 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHKeyPath
	I0731 21:16:42.397916 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:42.397991 1140361 main.go:141] libmachine: (pause-355751) Calling .GetSSHUsername
	I0731 21:16:42.398077 1140361 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/pause-355751/id_rsa Username:docker}
	I0731 21:16:42.398163 1140361 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/pause-355751/id_rsa Username:docker}
	I0731 21:16:42.503295 1140361 ssh_runner.go:195] Run: systemctl --version
	I0731 21:16:42.510373 1140361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:16:42.654829 1140361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:16:42.665413 1140361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:16:42.665503 1140361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:16:42.675767 1140361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 21:16:42.675806 1140361 start.go:495] detecting cgroup driver to use...
	I0731 21:16:42.675891 1140361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:16:42.693481 1140361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:16:42.708666 1140361 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:16:42.708741 1140361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:16:42.722787 1140361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:16:42.737762 1140361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:16:42.885362 1140361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:16:43.035480 1140361 docker.go:233] disabling docker service ...
	I0731 21:16:43.035561 1140361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:16:43.053601 1140361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:16:43.068137 1140361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:16:43.207229 1140361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:16:43.337476 1140361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:16:43.351280 1140361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:16:43.369973 1140361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:16:43.370059 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.380900 1140361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:16:43.380993 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.391638 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.402157 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.412738 1140361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:16:43.423505 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.435246 1140361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.447383 1140361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:16:43.457936 1140361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:16:43.467611 1140361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:16:43.477881 1140361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:16:43.631194 1140361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:16:52.089155 1140361 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.457905789s)
	I0731 21:16:52.089199 1140361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:16:52.089264 1140361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:16:52.094618 1140361 start.go:563] Will wait 60s for crictl version
	I0731 21:16:52.094703 1140361 ssh_runner.go:195] Run: which crictl
	I0731 21:16:52.099414 1140361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:16:52.133092 1140361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:16:52.133198 1140361 ssh_runner.go:195] Run: crio --version
	I0731 21:16:52.167153 1140361 ssh_runner.go:195] Run: crio --version
	I0731 21:16:52.198903 1140361 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:16:52.200235 1140361 main.go:141] libmachine: (pause-355751) Calling .GetIP
	I0731 21:16:52.203571 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:52.203929 1140361 main.go:141] libmachine: (pause-355751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:44:60", ip: ""} in network mk-pause-355751: {Iface:virbr2 ExpiryTime:2024-07-31 22:15:24 +0000 UTC Type:0 Mac:52:54:00:17:44:60 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:pause-355751 Clientid:01:52:54:00:17:44:60}
	I0731 21:16:52.203968 1140361 main.go:141] libmachine: (pause-355751) DBG | domain pause-355751 has defined IP address 192.168.39.123 and MAC address 52:54:00:17:44:60 in network mk-pause-355751
	I0731 21:16:52.204201 1140361 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:16:52.208720 1140361 kubeadm.go:883] updating cluster {Name:pause-355751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-355751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:16:52.208927 1140361 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:16:52.209009 1140361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:16:52.257287 1140361 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:16:52.257325 1140361 crio.go:433] Images already preloaded, skipping extraction
	I0731 21:16:52.257402 1140361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:16:52.298775 1140361 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:16:52.298812 1140361 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:16:52.298821 1140361 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.30.3 crio true true} ...
	I0731 21:16:52.298938 1140361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-355751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-355751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:16:52.299023 1140361 ssh_runner.go:195] Run: crio config
	I0731 21:16:52.363786 1140361 cni.go:84] Creating CNI manager for ""
	I0731 21:16:52.363811 1140361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:16:52.363823 1140361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:16:52.363856 1140361 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-355751 NodeName:pause-355751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:16:52.364035 1140361 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-355751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:16:52.364130 1140361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:16:52.377176 1140361 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:16:52.377248 1140361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:16:52.389322 1140361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 21:16:52.408769 1140361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:16:52.427931 1140361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0731 21:16:52.450248 1140361 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0731 21:16:52.456210 1140361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:16:52.604359 1140361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:16:52.625588 1140361 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751 for IP: 192.168.39.123
	I0731 21:16:52.625608 1140361 certs.go:194] generating shared ca certs ...
	I0731 21:16:52.625623 1140361 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:16:52.625758 1140361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:16:52.625798 1140361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:16:52.625805 1140361 certs.go:256] generating profile certs ...
	I0731 21:16:52.625879 1140361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/client.key
	I0731 21:16:52.625930 1140361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/apiserver.key.edc11a9e
	I0731 21:16:52.625960 1140361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/proxy-client.key
	I0731 21:16:52.626068 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:16:52.626093 1140361 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:16:52.626100 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:16:52.626120 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:16:52.626141 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:16:52.626159 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:16:52.626193 1140361 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:16:52.626834 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:16:52.656884 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:16:52.684923 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:16:52.716489 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:16:52.752035 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 21:16:52.785782 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:16:52.814322 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:16:52.884994 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/pause-355751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:16:52.945037 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:16:53.065620 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:16:53.149785 1140361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:16:53.263939 1140361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:16:53.304759 1140361 ssh_runner.go:195] Run: openssl version
	I0731 21:16:53.327922 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:16:53.369700 1140361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:16:53.380210 1140361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:16:53.380288 1140361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:16:53.386544 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:16:53.398338 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:16:53.470711 1140361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:16:53.479057 1140361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:16:53.479142 1140361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:16:53.517031 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:16:53.564716 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:16:53.615121 1140361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:16:53.629907 1140361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:16:53.630004 1140361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:16:53.660849 1140361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
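
The lines above copy each CA into /usr/share/ca-certificates and then link it under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trusted CAs at verification time. A minimal Go sketch of that step, shelling out to openssl the same way the remote commands in the log do; the certificate path is illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up CAs as /etc/ssl/certs/<subject-hash>.0 symlinks.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // recreate the link if it already exists
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
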
	I0731 21:16:53.682162 1140361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:16:53.690631 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:16:53.704430 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:16:53.725107 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:16:53.750769 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:16:53.766024 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:16:53.792077 1140361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
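
Each `openssl x509 -checkend 86400` run above verifies that the corresponding control-plane certificate remains valid for at least the next 24 hours before the existing certs are reused. A small standard-library Go sketch of the same check, assuming a PEM-encoded certificate file (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Illustrative path; the log checks files under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 86400 seconds (24 hours).
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
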
	I0731 21:16:53.806433 1140361 kubeadm.go:392] StartCluster: {Name:pause-355751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-355751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:16:53.806598 1140361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:16:53.806667 1140361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:16:54.017273 1140361 cri.go:89] found id: "3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9"
	I0731 21:16:54.017304 1140361 cri.go:89] found id: "42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595"
	I0731 21:16:54.017311 1140361 cri.go:89] found id: "d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0"
	I0731 21:16:54.017316 1140361 cri.go:89] found id: "283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b"
	I0731 21:16:54.017320 1140361 cri.go:89] found id: "e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e"
	I0731 21:16:54.017325 1140361 cri.go:89] found id: "c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d"
	I0731 21:16:54.017328 1140361 cri.go:89] found id: "b000641aa8f9f5b7a88c75a4f4771fdc5fa566c4c4977c9ab7cf2176fcee160f"
	I0731 21:16:54.017337 1140361 cri.go:89] found id: "e8bd9ec58f2261f3896b75a0287718644d4eb8b3681ba267c708b9b8d182b6ad"
	I0731 21:16:54.017342 1140361 cri.go:89] found id: "20995b3b9e269e3c1c1bc16c568ed7017dffe0ccf2f6b76abed9032449537174"
	I0731 21:16:54.017349 1140361 cri.go:89] found id: "8d5372499f65c9e704316ee8725db44dd63c3d10c8ad40d45fe6ff63498806be"
	I0731 21:16:54.017352 1140361 cri.go:89] found id: "412f8517f07b3b4247fa5ca9065774d157eec3695560a456fac684734d97925c"
	I0731 21:16:54.017356 1140361 cri.go:89] found id: "0904484864bf039a7005f2660fa8c9433dd7064f48f605afd23ea7a7166c60ea"
	I0731 21:16:54.017360 1140361 cri.go:89] found id: ""
	I0731 21:16:54.017422 1140361 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
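
The truncated stderr above ends while minikube is enumerating existing kube-system containers through the CRI (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) and `runc list` to decide whether a restart or a fresh kubeadm run is needed. A rough Go sketch of that container enumeration, using the same crictl invocation shown in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system containers (running or exited) and print only
	// their IDs, mirroring the query issued over SSH in the log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}
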
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-355751 -n pause-355751
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-355751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-355751 logs -n 25: (1.402525408s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-605794 sudo cat              | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo cat              | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo find             | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo crio             | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-605794                       | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC | 31 Jul 24 21:14 UTC |
	| start   | -p force-systemd-flag-406944           | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:16 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:16 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-084648              | running-upgrade-084648    | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:17 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-355751                        | pause-355751              | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:17 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-406944 ssh cat      | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-406944           | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	| start   | -p cert-expiration-238338              | cert-expiration-238338    | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:17 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-081034 sudo            | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-084648              | running-upgrade-084648    | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p cert-options-425308                 | cert-options-425308       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-081034 sudo            | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-202332           | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:17:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:17:34.939754 1141656 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:17:34.939864 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.939869 1141656 out.go:304] Setting ErrFile to fd 2...
	I0731 21:17:34.939873 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.940087 1141656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:17:34.940794 1141656 out.go:298] Setting JSON to false
	I0731 21:17:34.941970 1141656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18006,"bootTime":1722442649,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:17:34.942043 1141656 start.go:139] virtualization: kvm guest
	I0731 21:17:34.944143 1141656 out.go:177] * [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:17:34.945463 1141656 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:17:34.945489 1141656 notify.go:220] Checking for updates...
	I0731 21:17:34.948015 1141656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:17:34.949284 1141656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:17:34.950603 1141656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:17:34.951789 1141656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:17:34.953035 1141656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:17:34.954861 1141656 config.go:182] Loaded profile config "cert-expiration-238338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.954981 1141656 config.go:182] Loaded profile config "cert-options-425308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955119 1141656 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955241 1141656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:17:34.996580 1141656 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:17:34.998984 1141656 start.go:297] selected driver: kvm2
	I0731 21:17:34.999025 1141656 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:17:34.999057 1141656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:17:35.000305 1141656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.000414 1141656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:17:35.018244 1141656 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:17:35.018319 1141656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:17:35.018619 1141656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:17:35.018683 1141656 cni.go:84] Creating CNI manager for ""
	I0731 21:17:35.018699 1141656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:17:35.018708 1141656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:17:35.018791 1141656 start.go:340] cluster config:
	{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:17:35.018922 1141656 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.020671 1141656 out.go:177] * Starting "kubernetes-upgrade-202332" primary control-plane node in "kubernetes-upgrade-202332" cluster
	I0731 21:17:32.938104 1140361 addons.go:510] duration metric: took 3.030079ms for enable addons: enabled=[]
	I0731 21:17:32.938148 1140361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:17:33.096118 1140361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:17:33.116269 1140361 node_ready.go:35] waiting up to 6m0s for node "pause-355751" to be "Ready" ...
	I0731 21:17:33.119402 1140361 node_ready.go:49] node "pause-355751" has status "Ready":"True"
	I0731 21:17:33.119437 1140361 node_ready.go:38] duration metric: took 3.119387ms for node "pause-355751" to be "Ready" ...
	I0731 21:17:33.119452 1140361 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:17:33.124204 1140361 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.493877 1140361 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:33.493914 1140361 pod_ready.go:81] duration metric: took 369.684776ms for pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.493929 1140361 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.894061 1140361 pod_ready.go:92] pod "etcd-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:33.894093 1140361 pod_ready.go:81] duration metric: took 400.154155ms for pod "etcd-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.894111 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.300651 1140361 pod_ready.go:92] pod "kube-apiserver-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:34.300688 1140361 pod_ready.go:81] duration metric: took 406.567042ms for pod "kube-apiserver-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.300703 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.693593 1140361 pod_ready.go:92] pod "kube-controller-manager-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:34.693624 1140361 pod_ready.go:81] duration metric: took 392.913016ms for pod "kube-controller-manager-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.693636 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5gxch" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.093276 1140361 pod_ready.go:92] pod "kube-proxy-5gxch" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:35.093308 1140361 pod_ready.go:81] duration metric: took 399.664326ms for pod "kube-proxy-5gxch" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.093320 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.494032 1140361 pod_ready.go:92] pod "kube-scheduler-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:35.494061 1140361 pod_ready.go:81] duration metric: took 400.731976ms for pod "kube-scheduler-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.494072 1140361 pod_ready.go:38] duration metric: took 2.374606737s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
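
The pod_ready waits above poll each system-critical pod until its Ready condition is True. A minimal client-go sketch of one such wait, assuming a kubeconfig at the path shown in the log and using the etcd pod name as an example; this is an illustration of the pattern, not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll the pod until its Ready condition reports True.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-355751", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
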
	I0731 21:17:35.494093 1140361 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:17:35.494169 1140361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:17:35.508361 1140361 api_server.go:72] duration metric: took 2.57333187s to wait for apiserver process to appear ...
	I0731 21:17:35.508386 1140361 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:17:35.508407 1140361 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0731 21:17:35.513460 1140361 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0731 21:17:35.514756 1140361 api_server.go:141] control plane version: v1.30.3
	I0731 21:17:35.514785 1140361 api_server.go:131] duration metric: took 6.391832ms to wait for apiserver health ...
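
The healthz wait above is a plain HTTPS GET against the apiserver at https://192.168.39.123:8443/healthz; the apiserver is treated as healthy once it returns 200 with body "ok". A rough standard-library equivalent in Go; TLS verification is skipped here only to keep the sketch short, whereas a real check should trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification for brevity in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.123:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
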
	I0731 21:17:35.514795 1140361 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:17:35.696362 1140361 system_pods.go:59] 6 kube-system pods found
	I0731 21:17:35.696395 1140361 system_pods.go:61] "coredns-7db6d8ff4d-mmxvr" [1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5] Running
	I0731 21:17:35.696399 1140361 system_pods.go:61] "etcd-pause-355751" [77fecdb8-f837-4a6a-ae63-3b5674e1deab] Running
	I0731 21:17:35.696403 1140361 system_pods.go:61] "kube-apiserver-pause-355751" [fa097351-c7a0-42d0-a7de-49c912822a8e] Running
	I0731 21:17:35.696410 1140361 system_pods.go:61] "kube-controller-manager-pause-355751" [ee8b6672-0856-4e5f-9a5d-5e829641fce5] Running
	I0731 21:17:35.696413 1140361 system_pods.go:61] "kube-proxy-5gxch" [12d54d0f-6c0e-4234-a2b1-04a55f854cc5] Running
	I0731 21:17:35.696416 1140361 system_pods.go:61] "kube-scheduler-pause-355751" [2e330208-70c3-409c-891e-4cc48386f8f9] Running
	I0731 21:17:35.696423 1140361 system_pods.go:74] duration metric: took 181.622041ms to wait for pod list to return data ...
	I0731 21:17:35.696430 1140361 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:17:35.893791 1140361 default_sa.go:45] found service account: "default"
	I0731 21:17:35.893820 1140361 default_sa.go:55] duration metric: took 197.383056ms for default service account to be created ...
	I0731 21:17:35.893830 1140361 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:17:36.096766 1140361 system_pods.go:86] 6 kube-system pods found
	I0731 21:17:36.096807 1140361 system_pods.go:89] "coredns-7db6d8ff4d-mmxvr" [1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5] Running
	I0731 21:17:36.096816 1140361 system_pods.go:89] "etcd-pause-355751" [77fecdb8-f837-4a6a-ae63-3b5674e1deab] Running
	I0731 21:17:36.096822 1140361 system_pods.go:89] "kube-apiserver-pause-355751" [fa097351-c7a0-42d0-a7de-49c912822a8e] Running
	I0731 21:17:36.096829 1140361 system_pods.go:89] "kube-controller-manager-pause-355751" [ee8b6672-0856-4e5f-9a5d-5e829641fce5] Running
	I0731 21:17:36.096835 1140361 system_pods.go:89] "kube-proxy-5gxch" [12d54d0f-6c0e-4234-a2b1-04a55f854cc5] Running
	I0731 21:17:36.096841 1140361 system_pods.go:89] "kube-scheduler-pause-355751" [2e330208-70c3-409c-891e-4cc48386f8f9] Running
	I0731 21:17:36.096852 1140361 system_pods.go:126] duration metric: took 203.014702ms to wait for k8s-apps to be running ...
	I0731 21:17:36.096861 1140361 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:17:36.096921 1140361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:17:36.113677 1140361 system_svc.go:56] duration metric: took 16.799579ms WaitForService to wait for kubelet
	I0731 21:17:36.113714 1140361 kubeadm.go:582] duration metric: took 3.178692026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:17:36.113735 1140361 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:17:36.293791 1140361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:17:36.293824 1140361 node_conditions.go:123] node cpu capacity is 2
	I0731 21:17:36.293836 1140361 node_conditions.go:105] duration metric: took 180.096646ms to run NodePressure ...
	I0731 21:17:36.293848 1140361 start.go:241] waiting for startup goroutines ...
	I0731 21:17:36.293855 1140361 start.go:246] waiting for cluster config update ...
	I0731 21:17:36.293862 1140361 start.go:255] writing updated cluster config ...
	I0731 21:17:36.294212 1140361 ssh_runner.go:195] Run: rm -f paused
	I0731 21:17:36.361937 1140361 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:17:36.363931 1140361 out.go:177] * Done! kubectl is now configured to use "pause-355751" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.022460087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460657022439845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feb6042d-6d5b-4313-aa9f-c9227889ecd1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.023205506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43f49a98-2e30-465b-abe8-2ea91645b506 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.023292617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43f49a98-2e30-465b-abe8-2ea91645b506 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.023558691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43f49a98-2e30-465b-abe8-2ea91645b506 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.067235833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86c778e4-01c4-4955-9f9b-6145c89e315b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.067326319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86c778e4-01c4-4955-9f9b-6145c89e315b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.068528712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b314cf22-f97e-48c1-b69b-d2fe12541ff9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.068993842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460657068970949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b314cf22-f97e-48c1-b69b-d2fe12541ff9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.069556466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9870359-145d-4068-9953-2c42f98ea08f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.069608610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9870359-145d-4068-9953-2c42f98ea08f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.069875572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9870359-145d-4068-9953-2c42f98ea08f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.113074732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72613ddf-f9dd-42e6-aa34-9127b58d668c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.113171439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72613ddf-f9dd-42e6-aa34-9127b58d668c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.114878744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=404beda7-7970-48f1-9d04-fdbf5d618fff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.115249113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460657115226270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=404beda7-7970-48f1-9d04-fdbf5d618fff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.115991608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8088aa55-fd11-46e1-ad30-c5a31de67633 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.116049067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8088aa55-fd11-46e1-ad30-c5a31de67633 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.116344297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8088aa55-fd11-46e1-ad30-c5a31de67633 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.158041137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cec1e06e-4e10-4256-9215-98bf6831c7ed name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.158120994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cec1e06e-4e10-4256-9215-98bf6831c7ed name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.159196726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0a33c50-2a11-446b-8aa1-9f63bf91b39b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.159801100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460657159705840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0a33c50-2a11-446b-8aa1-9f63bf91b39b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.160374494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=251d41b6-8c24-4b5f-8545-5d66beb9d3af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.160429286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=251d41b6-8c24-4b5f-8545-5d66beb9d3af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:37 pause-355751 crio[2242]: time="2024-07-31 21:17:37.160705043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=251d41b6-8c24-4b5f-8545-5d66beb9d3af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51acdff4be1b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Running             coredns                   2                   7b41ba04371fb       coredns-7db6d8ff4d-mmxvr
	dd6c692486669       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   14 seconds ago      Running             kube-proxy                2                   7dc721d4d2102       kube-proxy-5gxch
	3cee4515db0ba       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   18 seconds ago      Running             kube-scheduler            2                   41c6ec046bff1       kube-scheduler-pause-355751
	b72e38826aabf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   18 seconds ago      Running             kube-controller-manager   2                   384b594b16054       kube-controller-manager-pause-355751
	def11b9db4f10       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 seconds ago      Running             etcd                      2                   85706cead17d8       etcd-pause-355751
	dee0fd326ad4a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   18 seconds ago      Running             kube-apiserver            2                   c6b505336a6b0       kube-apiserver-pause-355751
	3700334b2c0a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   7b41ba04371fb       coredns-7db6d8ff4d-mmxvr
	42127e81d231b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   43 seconds ago      Exited              kube-proxy                1                   7dc721d4d2102       kube-proxy-5gxch
	d6b33a52034f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   43 seconds ago      Exited              kube-scheduler            1                   41c6ec046bff1       kube-scheduler-pause-355751
	283c006e5fba0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   43 seconds ago      Exited              kube-apiserver            1                   c6b505336a6b0       kube-apiserver-pause-355751
	e88a66dc8ea8a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   43 seconds ago      Exited              kube-controller-manager   1                   384b594b16054       kube-controller-manager-pause-355751
	c3d4757f05185       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   44 seconds ago      Exited              etcd                      1                   85706cead17d8       etcd-pause-355751
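	The Running/Exited attempt pairs above line up with the two kubelet restarts recorded later in this log (attempt 1 containers exited, attempt 2 containers running). If reproducing locally, a roughly equivalent listing can usually be pulled straight from the node with crictl; the profile name below is taken from this run, and the exact invocation is an assumption rather than part of the captured output:
	
	  minikube ssh -p pause-355751 -- sudo crictl ps -a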
	
	
	==> coredns [3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57238 - 46564 "HINFO IN 1929438316946666935.7327791819667359449. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.101705404s
	
	
	==> coredns [51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56175 - 45202 "HINFO IN 3506816275912152322.1820623102528801169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020669567s
	
	
	==> describe nodes <==
	Name:               pause-355751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-355751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=pause-355751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_15_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-355751
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    pause-355751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 673f805b8f9847568a9c148fe16f391e
	  System UUID:                673f805b-8f98-4756-8a9c-148fe16f391e
	  Boot ID:                    8d345fc6-e35f-491c-a683-20c76749cc5f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-mmxvr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-355751                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-355751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-355751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-5gxch                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-355751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 87s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  106s               kubelet          Node pause-355751 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  106s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    106s               kubelet          Node pause-355751 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s               kubelet          Node pause-355751 status is now: NodeHasSufficientPID
	  Normal   Starting                 106s               kubelet          Starting kubelet.
	  Normal   NodeReady                105s               kubelet          Node pause-355751 status is now: NodeReady
	  Normal   RegisteredNode           92s                node-controller  Node pause-355751 event: Registered Node pause-355751 in Controller
	  Warning  ContainerGCFailed        46s                kubelet          [rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/crio/crio.sock: read: connection reset by peer, rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"]
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node pause-355751 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node pause-355751 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node pause-355751 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3s                 node-controller  Node pause-355751 event: Registered Node pause-355751 in Controller
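	For reference, the request percentages in the describe output above follow directly from the node's allocatable resources listed earlier: 750m CPU requested against 2 CPUs is 750/2000 ≈ 37%, and 170Mi of memory requested against 2015704Ki (≈1968Mi) allocatable is ≈ 8%.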
	
	
	==> dmesg <==
	[  +0.064688] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174300] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.144788] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.317716] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.466322] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.063886] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478013] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.629359] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.966945] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.087831] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 21:16] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +0.080744] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.917719] kauditd_printk_skb: 69 callbacks suppressed
	[ +23.262506] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.151849] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.182057] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.132873] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.278363] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +8.973155] systemd-fstab-generator[2352]: Ignoring "noauto" option for root device
	[  +0.082465] kauditd_printk_skb: 100 callbacks suppressed
	[Jul31 21:17] kauditd_printk_skb: 87 callbacks suppressed
	[ +13.073103] systemd-fstab-generator[3248]: Ignoring "noauto" option for root device
	[  +4.592795] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.218504] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +0.089025] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d] <==
	{"level":"info","ts":"2024-07-31T21:16:53.806211Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:16:54.990828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.990976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.991029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.991066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.999976Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:pause-355751 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:16:55.00017Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:16:55.000475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:16:55.000648Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:16:55.00068Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:16:55.004397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-07-31T21:16:55.005358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:17:16.222766Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T21:17:16.222837Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-355751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	{"level":"warn","ts":"2024-07-31T21:17:16.22295Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.222975Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.224521Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.2246Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T21:17:16.224851Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4c9b6dd9118b591e","current-leader-member-id":"4c9b6dd9118b591e"}
	{"level":"info","ts":"2024-07-31T21:17:16.421204Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:16.421647Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:16.421734Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-355751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	
	
	==> etcd [def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2] <==
	{"level":"info","ts":"2024-07-31T21:17:19.602084Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:17:19.602121Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:17:19.603654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e switched to configuration voters=(5520126547342350622)"}
	{"level":"info","ts":"2024-07-31T21:17:19.60378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","added-peer-id":"4c9b6dd9118b591e","added-peer-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2024-07-31T21:17:19.603955Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:17:19.604016Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:17:19.621213Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:17:19.628014Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:17:19.621866Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:19.629808Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:19.631827Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:17:20.607723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.607916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.607977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.608036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.614457Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:17:20.614496Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:pause-355751 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:17:20.614649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:17:20.616359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-07-31T21:17:20.617579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:17:20.617823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:17:20.617851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:17:37 up 2 min,  0 users,  load average: 1.72, 0.68, 0.25
	Linux pause-355751 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b] <==
	I0731 21:17:04.513443       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0731 21:17:04.513514       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0731 21:17:04.513561       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 21:17:04.513568       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 21:17:04.513577       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0731 21:17:04.513583       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0731 21:17:04.513589       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0731 21:17:04.513593       1 naming_controller.go:302] Shutting down NamingConditionController
	I0731 21:17:04.513600       1 controller.go:167] Shutting down OpenAPI controller
	I0731 21:17:04.513607       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0731 21:17:04.513650       1 controller.go:157] Shutting down quota evaluator
	I0731 21:17:04.515414       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.514505       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0731 21:17:04.514543       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0731 21:17:04.515455       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515463       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515468       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515472       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.514548       1 establishing_controller.go:87] Shutting down EstablishingController
	I0731 21:17:04.514556       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0731 21:17:04.513226       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 21:17:04.515530       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0731 21:17:04.518863       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 21:17:04.515002       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 21:17:04.515083       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-apiserver [dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2] <==
	I0731 21:17:22.128240       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:17:22.135104       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:17:22.135140       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:17:22.135149       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:17:22.142858       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:17:22.142897       1 policy_source.go:224] refreshing policies
	I0731 21:17:22.219081       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:17:22.219088       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:17:22.226845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:17:22.227690       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:17:22.228286       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 21:17:22.228329       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 21:17:22.228451       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:17:22.229630       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:17:22.232792       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 21:17:22.235415       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:17:23.030200       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 21:17:23.336833       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123]
	I0731 21:17:23.338247       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:17:23.344210       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 21:17:23.697686       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:17:23.710430       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:17:23.757412       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:17:23.807593       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:17:23.820440       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73] <==
	I0731 21:17:34.376181       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 21:17:34.376352       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 21:17:34.379296       1 shared_informer.go:320] Caches are synced for job
	I0731 21:17:34.385927       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 21:17:34.386076       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 21:17:34.386127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.151µs"
	I0731 21:17:34.388846       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 21:17:34.389412       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 21:17:34.389474       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 21:17:34.389481       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 21:17:34.389490       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 21:17:34.391967       1 shared_informer.go:320] Caches are synced for taint
	I0731 21:17:34.392151       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 21:17:34.392250       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-355751"
	I0731 21:17:34.392396       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 21:17:34.394821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 21:17:34.396727       1 shared_informer.go:320] Caches are synced for GC
	I0731 21:17:34.398261       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 21:17:34.449656       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 21:17:34.494904       1 shared_informer.go:320] Caches are synced for disruption
	I0731 21:17:34.572884       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:17:34.578090       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:17:34.983092       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:17:34.983148       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 21:17:35.013347       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e] <==
	I0731 21:16:59.323483       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0731 21:16:59.323632       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0731 21:16:59.323721       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0731 21:16:59.372528       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0731 21:16:59.372662       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	E0731 21:16:59.423376       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0731 21:16:59.423417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0731 21:16:59.473211       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0731 21:16:59.473351       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0731 21:16:59.473330       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0731 21:16:59.473454       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0731 21:16:59.523360       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0731 21:16:59.523594       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0731 21:16:59.523682       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0731 21:16:59.572908       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0731 21:16:59.573197       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0731 21:16:59.573240       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0731 21:16:59.623620       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0731 21:16:59.623825       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0731 21:16:59.624007       1 shared_informer.go:313] Waiting for caches to sync for TTL
	W0731 21:17:09.674499       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:10.175317       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:11.176120       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:13.177482       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	E0731 21:17:13.177610       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.39.123:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-proxy [42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595] <==
	I0731 21:16:54.620895       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:16:56.654290       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0731 21:16:56.696522       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:16:56.696624       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:16:56.696659       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:16:56.699018       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:16:56.699243       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:16:56.699496       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:16:56.700591       1 config.go:192] "Starting service config controller"
	I0731 21:16:56.700841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:16:56.700975       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:16:56.701040       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:16:56.701553       1 config.go:319] "Starting node config controller"
	I0731 21:16:56.704646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:16:56.801589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:16:56.801602       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:16:56.806956       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb] <==
	I0731 21:17:22.850111       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:17:22.869184       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0731 21:17:22.903628       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:17:22.903810       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:17:22.903871       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:17:22.906500       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:17:22.906694       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:17:22.906978       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:17:22.908711       1 config.go:192] "Starting service config controller"
	I0731 21:17:22.909010       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:17:22.909082       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:17:22.909111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:17:22.910200       1 config.go:319] "Starting node config controller"
	I0731 21:17:22.910312       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:17:23.009502       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:17:23.009665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:17:23.010572       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb] <==
	I0731 21:17:20.132400       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:17:22.123185       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:17:22.123273       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:17:22.123303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:17:22.123327       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:17:22.142634       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:17:22.142879       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:17:22.148266       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:17:22.148313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:17:22.148427       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:17:22.148538       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:17:22.249553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0] <==
	I0731 21:16:55.564394       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:16:56.581411       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:16:56.581537       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:16:56.581567       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:16:56.581622       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:16:56.637263       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:16:56.638696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:16:56.654230       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:16:56.654478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:16:56.654536       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:16:56.654577       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:16:56.755112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 21:17:04.358520       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620383    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1555b141973e714407462de2a0cd7cb-ca-certs\") pod \"kube-controller-manager-pause-355751\" (UID: \"c1555b141973e714407462de2a0cd7cb\") " pod="kube-system/kube-controller-manager-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620425    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1555b141973e714407462de2a0cd7cb-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-355751\" (UID: \"c1555b141973e714407462de2a0cd7cb\") " pod="kube-system/kube-controller-manager-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620484    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/736c8ddf630aaaf8e7cf6c539aaecc56-etcd-data\") pod \"etcd-pause-355751\" (UID: \"736c8ddf630aaaf8e7cf6c539aaecc56\") " pod="kube-system/etcd-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.706083    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: E0731 21:17:18.706963    3255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.123:8443: connect: connection refused" node="pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.863637    3255 scope.go:117] "RemoveContainer" containerID="c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.864835    3255 scope.go:117] "RemoveContainer" containerID="283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.869664    3255 scope.go:117] "RemoveContainer" containerID="e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.870226    3255 scope.go:117] "RemoveContainer" containerID="d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: E0731 21:17:19.013525    3255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-355751?timeout=10s\": dial tcp 192.168.39.123:8443: connect: connection refused" interval="800ms"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: I0731 21:17:19.109566    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: E0731 21:17:19.110518    3255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.123:8443: connect: connection refused" node="pause-355751"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: I0731 21:17:19.912340    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.181920    3255 kubelet_node_status.go:112] "Node was previously registered" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.182028    3255 kubelet_node_status.go:76] "Successfully registered node" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.183600    3255 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.185076    3255 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.384846    3255 apiserver.go:52] "Watching apiserver"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.389166    3255 topology_manager.go:215] "Topology Admit Handler" podUID="1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mmxvr"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.389673    3255 topology_manager.go:215] "Topology Admit Handler" podUID="12d54d0f-6c0e-4234-a2b1-04a55f854cc5" podNamespace="kube-system" podName="kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.410826    3255 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.507012    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d54d0f-6c0e-4234-a2b1-04a55f854cc5-xtables-lock\") pod \"kube-proxy-5gxch\" (UID: \"12d54d0f-6c0e-4234-a2b1-04a55f854cc5\") " pod="kube-system/kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.507367    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d54d0f-6c0e-4234-a2b1-04a55f854cc5-lib-modules\") pod \"kube-proxy-5gxch\" (UID: \"12d54d0f-6c0e-4234-a2b1-04a55f854cc5\") " pod="kube-system/kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.690874    3255 scope.go:117] "RemoveContainer" containerID="42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.691058    3255 scope.go:117] "RemoveContainer" containerID="3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355751 -n pause-355751
helpers_test.go:261: (dbg) Run:  kubectl --context pause-355751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-355751 -n pause-355751
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-355751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-355751 logs -n 25: (1.422448347s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-605794 sudo cat              | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo cat              | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo                  | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo find             | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-605794 sudo crio             | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-605794                       | cilium-605794             | jenkins | v1.33.1 | 31 Jul 24 21:14 UTC | 31 Jul 24 21:14 UTC |
	| start   | -p force-systemd-flag-406944           | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:16 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:16 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-084648              | running-upgrade-084648    | jenkins | v1.33.1 | 31 Jul 24 21:15 UTC | 31 Jul 24 21:17 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-355751                        | pause-355751              | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:17 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-406944 ssh cat      | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-406944           | force-systemd-flag-406944 | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:16 UTC |
	| start   | -p cert-expiration-238338              | cert-expiration-238338    | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC | 31 Jul 24 21:17 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-081034 sudo            | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:16 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-084648              | running-upgrade-084648    | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p cert-options-425308                 | cert-options-425308       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-081034 sudo            | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-081034                 | NoKubernetes-081034       | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC | 31 Jul 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-202332           | kubernetes-upgrade-202332 | jenkins | v1.33.1 | 31 Jul 24 21:17 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:17:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:17:34.939754 1141656 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:17:34.939864 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.939869 1141656 out.go:304] Setting ErrFile to fd 2...
	I0731 21:17:34.939873 1141656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:17:34.940087 1141656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:17:34.940794 1141656 out.go:298] Setting JSON to false
	I0731 21:17:34.941970 1141656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18006,"bootTime":1722442649,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:17:34.942043 1141656 start.go:139] virtualization: kvm guest
	I0731 21:17:34.944143 1141656 out.go:177] * [kubernetes-upgrade-202332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:17:34.945463 1141656 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:17:34.945489 1141656 notify.go:220] Checking for updates...
	I0731 21:17:34.948015 1141656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:17:34.949284 1141656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:17:34.950603 1141656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:17:34.951789 1141656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:17:34.953035 1141656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:17:34.954861 1141656 config.go:182] Loaded profile config "cert-expiration-238338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.954981 1141656 config.go:182] Loaded profile config "cert-options-425308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955119 1141656 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:17:34.955241 1141656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:17:34.996580 1141656 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:17:34.998984 1141656 start.go:297] selected driver: kvm2
	I0731 21:17:34.999025 1141656 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:17:34.999057 1141656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:17:35.000305 1141656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.000414 1141656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:17:35.018244 1141656 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:17:35.018319 1141656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:17:35.018619 1141656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:17:35.018683 1141656 cni.go:84] Creating CNI manager for ""
	I0731 21:17:35.018699 1141656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:17:35.018708 1141656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:17:35.018791 1141656 start.go:340] cluster config:
	{Name:kubernetes-upgrade-202332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-202332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:17:35.018922 1141656 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:17:35.020671 1141656 out.go:177] * Starting "kubernetes-upgrade-202332" primary control-plane node in "kubernetes-upgrade-202332" cluster
	I0731 21:17:32.938104 1140361 addons.go:510] duration metric: took 3.030079ms for enable addons: enabled=[]
	I0731 21:17:32.938148 1140361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:17:33.096118 1140361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:17:33.116269 1140361 node_ready.go:35] waiting up to 6m0s for node "pause-355751" to be "Ready" ...
	I0731 21:17:33.119402 1140361 node_ready.go:49] node "pause-355751" has status "Ready":"True"
	I0731 21:17:33.119437 1140361 node_ready.go:38] duration metric: took 3.119387ms for node "pause-355751" to be "Ready" ...
	I0731 21:17:33.119452 1140361 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:17:33.124204 1140361 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.493877 1140361 pod_ready.go:92] pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:33.493914 1140361 pod_ready.go:81] duration metric: took 369.684776ms for pod "coredns-7db6d8ff4d-mmxvr" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.493929 1140361 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.894061 1140361 pod_ready.go:92] pod "etcd-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:33.894093 1140361 pod_ready.go:81] duration metric: took 400.154155ms for pod "etcd-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:33.894111 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.300651 1140361 pod_ready.go:92] pod "kube-apiserver-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:34.300688 1140361 pod_ready.go:81] duration metric: took 406.567042ms for pod "kube-apiserver-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.300703 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.693593 1140361 pod_ready.go:92] pod "kube-controller-manager-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:34.693624 1140361 pod_ready.go:81] duration metric: took 392.913016ms for pod "kube-controller-manager-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:34.693636 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5gxch" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.093276 1140361 pod_ready.go:92] pod "kube-proxy-5gxch" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:35.093308 1140361 pod_ready.go:81] duration metric: took 399.664326ms for pod "kube-proxy-5gxch" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.093320 1140361 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.494032 1140361 pod_ready.go:92] pod "kube-scheduler-pause-355751" in "kube-system" namespace has status "Ready":"True"
	I0731 21:17:35.494061 1140361 pod_ready.go:81] duration metric: took 400.731976ms for pod "kube-scheduler-pause-355751" in "kube-system" namespace to be "Ready" ...
	I0731 21:17:35.494072 1140361 pod_ready.go:38] duration metric: took 2.374606737s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:17:35.494093 1140361 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:17:35.494169 1140361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:17:35.508361 1140361 api_server.go:72] duration metric: took 2.57333187s to wait for apiserver process to appear ...
	I0731 21:17:35.508386 1140361 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:17:35.508407 1140361 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0731 21:17:35.513460 1140361 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0731 21:17:35.514756 1140361 api_server.go:141] control plane version: v1.30.3
	I0731 21:17:35.514785 1140361 api_server.go:131] duration metric: took 6.391832ms to wait for apiserver health ...
	I0731 21:17:35.514795 1140361 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:17:35.696362 1140361 system_pods.go:59] 6 kube-system pods found
	I0731 21:17:35.696395 1140361 system_pods.go:61] "coredns-7db6d8ff4d-mmxvr" [1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5] Running
	I0731 21:17:35.696399 1140361 system_pods.go:61] "etcd-pause-355751" [77fecdb8-f837-4a6a-ae63-3b5674e1deab] Running
	I0731 21:17:35.696403 1140361 system_pods.go:61] "kube-apiserver-pause-355751" [fa097351-c7a0-42d0-a7de-49c912822a8e] Running
	I0731 21:17:35.696410 1140361 system_pods.go:61] "kube-controller-manager-pause-355751" [ee8b6672-0856-4e5f-9a5d-5e829641fce5] Running
	I0731 21:17:35.696413 1140361 system_pods.go:61] "kube-proxy-5gxch" [12d54d0f-6c0e-4234-a2b1-04a55f854cc5] Running
	I0731 21:17:35.696416 1140361 system_pods.go:61] "kube-scheduler-pause-355751" [2e330208-70c3-409c-891e-4cc48386f8f9] Running
	I0731 21:17:35.696423 1140361 system_pods.go:74] duration metric: took 181.622041ms to wait for pod list to return data ...
	I0731 21:17:35.696430 1140361 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:17:35.893791 1140361 default_sa.go:45] found service account: "default"
	I0731 21:17:35.893820 1140361 default_sa.go:55] duration metric: took 197.383056ms for default service account to be created ...
	I0731 21:17:35.893830 1140361 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:17:36.096766 1140361 system_pods.go:86] 6 kube-system pods found
	I0731 21:17:36.096807 1140361 system_pods.go:89] "coredns-7db6d8ff4d-mmxvr" [1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5] Running
	I0731 21:17:36.096816 1140361 system_pods.go:89] "etcd-pause-355751" [77fecdb8-f837-4a6a-ae63-3b5674e1deab] Running
	I0731 21:17:36.096822 1140361 system_pods.go:89] "kube-apiserver-pause-355751" [fa097351-c7a0-42d0-a7de-49c912822a8e] Running
	I0731 21:17:36.096829 1140361 system_pods.go:89] "kube-controller-manager-pause-355751" [ee8b6672-0856-4e5f-9a5d-5e829641fce5] Running
	I0731 21:17:36.096835 1140361 system_pods.go:89] "kube-proxy-5gxch" [12d54d0f-6c0e-4234-a2b1-04a55f854cc5] Running
	I0731 21:17:36.096841 1140361 system_pods.go:89] "kube-scheduler-pause-355751" [2e330208-70c3-409c-891e-4cc48386f8f9] Running
	I0731 21:17:36.096852 1140361 system_pods.go:126] duration metric: took 203.014702ms to wait for k8s-apps to be running ...
	I0731 21:17:36.096861 1140361 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:17:36.096921 1140361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:17:36.113677 1140361 system_svc.go:56] duration metric: took 16.799579ms WaitForService to wait for kubelet
	I0731 21:17:36.113714 1140361 kubeadm.go:582] duration metric: took 3.178692026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:17:36.113735 1140361 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:17:36.293791 1140361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:17:36.293824 1140361 node_conditions.go:123] node cpu capacity is 2
	I0731 21:17:36.293836 1140361 node_conditions.go:105] duration metric: took 180.096646ms to run NodePressure ...
	I0731 21:17:36.293848 1140361 start.go:241] waiting for startup goroutines ...
	I0731 21:17:36.293855 1140361 start.go:246] waiting for cluster config update ...
	I0731 21:17:36.293862 1140361 start.go:255] writing updated cluster config ...
	I0731 21:17:36.294212 1140361 ssh_runner.go:195] Run: rm -f paused
	I0731 21:17:36.361937 1140361 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:17:36.363931 1140361 out.go:177] * Done! kubectl is now configured to use "pause-355751" cluster and "default" namespace by default
	I0731 21:17:34.165026 1141329 main.go:141] libmachine: (cert-options-425308) Waiting to get IP...
	I0731 21:17:34.165878 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:34.166287 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:34.166308 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:34.166256 1141468 retry.go:31] will retry after 234.857944ms: waiting for machine to come up
	I0731 21:17:34.403149 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:34.445420 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:34.445440 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:34.445271 1141468 retry.go:31] will retry after 301.23504ms: waiting for machine to come up
	I0731 21:17:34.823761 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:34.824294 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:34.824313 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:34.824244 1141468 retry.go:31] will retry after 480.72564ms: waiting for machine to come up
	I0731 21:17:35.306671 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:35.307119 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:35.307138 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:35.307071 1141468 retry.go:31] will retry after 380.687516ms: waiting for machine to come up
	I0731 21:17:35.689879 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:35.690709 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:35.690729 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:35.690666 1141468 retry.go:31] will retry after 537.833966ms: waiting for machine to come up
	I0731 21:17:36.230512 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:36.231089 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:36.231115 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:36.231048 1141468 retry.go:31] will retry after 921.837364ms: waiting for machine to come up
	I0731 21:17:37.154757 1141329 main.go:141] libmachine: (cert-options-425308) DBG | domain cert-options-425308 has defined MAC address 52:54:00:ce:53:7e in network mk-cert-options-425308
	I0731 21:17:37.155263 1141329 main.go:141] libmachine: (cert-options-425308) DBG | unable to find current IP address of domain cert-options-425308 in network mk-cert-options-425308
	I0731 21:17:37.155282 1141329 main.go:141] libmachine: (cert-options-425308) DBG | I0731 21:17:37.155218 1141468 retry.go:31] will retry after 963.609099ms: waiting for machine to come up
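
	The "will retry after ..." lines above show libmachine polling the libvirt domain until a DHCP lease appears, sleeping a growing, jittered interval between attempts. The following self-contained sketch reproduces the shape of that polling loop; the probe function, initial delay, and growth factor are assumptions for illustration, not the actual docker-machine code.

	// waitip.go: poll a probe until it succeeds, backing off with jitter between
	// attempts, similar in shape to the "will retry after ..." lines above.
	// Illustrative sketch only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries probe until it returns nil or the timeout elapses.
	func waitFor(probe func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond // assumed initial delay
		for attempt := 1; ; attempt++ {
			if err := probe(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for probe")
			}
			// Grow the delay and add jitter so retries do not synchronize.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v (attempt %d)\n", sleep, attempt)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		start := time.Now()
		// Fake probe: pretend the machine gets an IP after ~2 seconds.
		err := waitFor(func() error {
			if time.Since(start) > 2*time.Second {
				return nil
			}
			return errors.New("no IP yet")
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
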
	
	
	==> CRI-O <==
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.212447691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460659212411323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae772997-e647-4891-9fd4-c995e12f964f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.213263128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afbaeeee-f56c-4d16-879f-bbf66b5b1ab1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.213350968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afbaeeee-f56c-4d16-879f-bbf66b5b1ab1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.213678102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afbaeeee-f56c-4d16-879f-bbf66b5b1ab1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.266374654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5633a082-406c-4ecc-88bb-5f90227db8aa name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.266849786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5633a082-406c-4ecc-88bb-5f90227db8aa name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.270335942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=feec62c6-63a7-429b-ab7b-9f64fb44fef3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.270936102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460659270897326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feec62c6-63a7-429b-ab7b-9f64fb44fef3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.272116938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16c2c655-c478-47af-b0b6-bc8a6de79565 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.272276944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16c2c655-c478-47af-b0b6-bc8a6de79565 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.272629082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16c2c655-c478-47af-b0b6-bc8a6de79565 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.320142963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbfcf4f6-7c34-46ca-a99e-c2e251b6083c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.320214675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbfcf4f6-7c34-46ca-a99e-c2e251b6083c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.321561868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=795ebbcd-d915-49bb-b52e-4e06d17b0e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.322016966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460659321994003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=795ebbcd-d915-49bb-b52e-4e06d17b0e48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.322792685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f7a072-7eab-4154-b3ea-166486028ea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.322850383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f7a072-7eab-4154-b3ea-166486028ea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.323095507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7f7a072-7eab-4154-b3ea-166486028ea0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.377678905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f98eaea7-e47d-41d1-9ee8-52267a665dd9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.377802842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f98eaea7-e47d-41d1-9ee8-52267a665dd9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.379322138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12987b0a-a3f3-4baf-998c-d64d3b9329cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.379840759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460659379721139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12987b0a-a3f3-4baf-998c-d64d3b9329cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.380483380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c55b726-074a-4c93-85fd-e0883e492258 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.380556665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c55b726-074a-4c93-85fd-e0883e492258 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:17:39 pause-355751 crio[2242]: time="2024-07-31 21:17:39.380850035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722460642715723078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722460642710812165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722460638896838272,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annot
ations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722460638907064083,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]
string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722460638887324163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernet
es.container.hash: faa820ff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722460638898923694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io
.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9,PodSandboxId:7b41ba04371fba0d2a00d7c60ee00523ce03156a2fdd91bed4747712a6c51711,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722460613737816561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mmxvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5,},Annotations:map[string]string{io.kubernetes.container.hash: ff86
780c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595,PodSandboxId:7dc721d4d210212f79f248c505003fbe7106d213d502c219268bb90d8c6f1194,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722460613436253243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-5gxch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d54d0f-6c0e-4234-a2b1-04a55f854cc5,},Annotations:map[string]string{io.kubernetes.container.hash: ae83b304,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0,PodSandboxId:41c6ec046bff1716ffd74b15769d736b8376b76f127d34b2fcd775e2570dae35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722460613374954825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-355751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01964e9c7b90161628e825d3e6c3138,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b,PodSandboxId:c6b505336a6b0275170629dbe1ef0984703c5d4f919784ffd7b52edafd26012c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722460613327724568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-355751,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920bf06bb7fb0e9a383c3699653c09e2,},Annotations:map[string]string{io.kubernetes.container.hash: faa820ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e,PodSandboxId:384b594b160545f1acf2395a19cb0ab271055c236d42035c85aeedc2c09c2ac9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722460613300969090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-355751,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1555b141973e714407462de2a0cd7cb,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d,PodSandboxId:85706cead17d8413742b7e2faf871fbfac342189c103aeb1abd6d9fbbcd60488,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722460613106249626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-355751,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 736c8ddf630aaaf8e7cf6c539aaecc56,},Annotations:map[string]string{io.kubernetes.container.hash: 63b04b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c55b726-074a-4c93-85fd-e0883e492258 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51acdff4be1b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago      Running             coredns                   2                   7b41ba04371fb       coredns-7db6d8ff4d-mmxvr
	dd6c692486669       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 seconds ago      Running             kube-proxy                2                   7dc721d4d2102       kube-proxy-5gxch
	3cee4515db0ba       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   20 seconds ago      Running             kube-scheduler            2                   41c6ec046bff1       kube-scheduler-pause-355751
	b72e38826aabf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   20 seconds ago      Running             kube-controller-manager   2                   384b594b16054       kube-controller-manager-pause-355751
	def11b9db4f10       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   20 seconds ago      Running             etcd                      2                   85706cead17d8       etcd-pause-355751
	dee0fd326ad4a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   20 seconds ago      Running             kube-apiserver            2                   c6b505336a6b0       kube-apiserver-pause-355751
	3700334b2c0a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago      Exited              coredns                   1                   7b41ba04371fb       coredns-7db6d8ff4d-mmxvr
	42127e81d231b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   46 seconds ago      Exited              kube-proxy                1                   7dc721d4d2102       kube-proxy-5gxch
	d6b33a52034f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   46 seconds ago      Exited              kube-scheduler            1                   41c6ec046bff1       kube-scheduler-pause-355751
	283c006e5fba0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   46 seconds ago      Exited              kube-apiserver            1                   c6b505336a6b0       kube-apiserver-pause-355751
	e88a66dc8ea8a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   46 seconds ago      Exited              kube-controller-manager   1                   384b594b16054       kube-controller-manager-pause-355751
	c3d4757f05185       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   46 seconds ago      Exited              etcd                      1                   85706cead17d8       etcd-pause-355751
	
	
	==> coredns [3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57238 - 46564 "HINFO IN 1929438316946666935.7327791819667359449. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.101705404s
	
	
	==> coredns [51acdff4be1b762911247623a0e5cc602c356f72b37be3813a5937ce10928db3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56175 - 45202 "HINFO IN 3506816275912152322.1820623102528801169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020669567s
	
	
	==> describe nodes <==
	Name:               pause-355751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-355751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=pause-355751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_15_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-355751
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:17:22 +0000   Wed, 31 Jul 2024 21:15:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    pause-355751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 673f805b8f9847568a9c148fe16f391e
	  System UUID:                673f805b-8f98-4756-8a9c-148fe16f391e
	  Boot ID:                    8d345fc6-e35f-491c-a683-20c76749cc5f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-mmxvr                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-355751                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-355751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-pause-355751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-5gxch                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-355751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 90s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeHasSufficientMemory  108s               kubelet          Node pause-355751 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    108s               kubelet          Node pause-355751 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s               kubelet          Node pause-355751 status is now: NodeHasSufficientPID
	  Normal   Starting                 108s               kubelet          Starting kubelet.
	  Normal   NodeReady                107s               kubelet          Node pause-355751 status is now: NodeReady
	  Normal   RegisteredNode           94s                node-controller  Node pause-355751 event: Registered Node pause-355751 in Controller
	  Warning  ContainerGCFailed        48s                kubelet          [rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/crio/crio.sock: read: connection reset by peer, rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"]
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-355751 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-355751 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-355751 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5s                 node-controller  Node pause-355751 event: Registered Node pause-355751 in Controller
	
	
	==> dmesg <==
	[  +0.064688] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.174300] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.144788] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.317716] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.466322] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.063886] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.478013] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.629359] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.966945] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.087831] kauditd_printk_skb: 41 callbacks suppressed
	[Jul31 21:16] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +0.080744] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.917719] kauditd_printk_skb: 69 callbacks suppressed
	[ +23.262506] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.151849] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.182057] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.132873] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.278363] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +8.973155] systemd-fstab-generator[2352]: Ignoring "noauto" option for root device
	[  +0.082465] kauditd_printk_skb: 100 callbacks suppressed
	[Jul31 21:17] kauditd_printk_skb: 87 callbacks suppressed
	[ +13.073103] systemd-fstab-generator[3248]: Ignoring "noauto" option for root device
	[  +4.592795] kauditd_printk_skb: 41 callbacks suppressed
	[ +10.218504] systemd-fstab-generator[3690]: Ignoring "noauto" option for root device
	[  +0.089025] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d] <==
	{"level":"info","ts":"2024-07-31T21:16:53.806211Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:16:54.990828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.990976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.991029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2024-07-31T21:16:54.991066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.991149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:16:54.999976Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:pause-355751 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:16:55.00017Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:16:55.000475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:16:55.000648Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:16:55.00068Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:16:55.004397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-07-31T21:16:55.005358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:17:16.222766Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T21:17:16.222837Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-355751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	{"level":"warn","ts":"2024-07-31T21:17:16.22295Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.222975Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.224521Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:17:16.2246Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T21:17:16.224851Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4c9b6dd9118b591e","current-leader-member-id":"4c9b6dd9118b591e"}
	{"level":"info","ts":"2024-07-31T21:17:16.421204Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:16.421647Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:16.421734Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-355751","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	
	
	==> etcd [def11b9db4f1026793734e3d4293fac2a4cbe40fd8779b31531b620efb7f43f2] <==
	{"level":"info","ts":"2024-07-31T21:17:19.602084Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:17:19.602121Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:17:19.603654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e switched to configuration voters=(5520126547342350622)"}
	{"level":"info","ts":"2024-07-31T21:17:19.60378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","added-peer-id":"4c9b6dd9118b591e","added-peer-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2024-07-31T21:17:19.603955Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:17:19.604016Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:17:19.621213Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:17:19.628014Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:17:19.621866Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:19.629808Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-07-31T21:17:19.631827Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:17:20.607723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.607916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.607977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-07-31T21:17:20.608036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.608134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 4"}
	{"level":"info","ts":"2024-07-31T21:17:20.614457Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:17:20.614496Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:pause-355751 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:17:20.614649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:17:20.616359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-07-31T21:17:20.617579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:17:20.617823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:17:20.617851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:17:39 up 2 min,  0 users,  load average: 1.74, 0.70, 0.26
	Linux pause-355751 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b] <==
	I0731 21:17:04.513443       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0731 21:17:04.513514       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0731 21:17:04.513561       1 available_controller.go:439] Shutting down AvailableConditionController
	I0731 21:17:04.513568       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0731 21:17:04.513577       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0731 21:17:04.513583       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0731 21:17:04.513589       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0731 21:17:04.513593       1 naming_controller.go:302] Shutting down NamingConditionController
	I0731 21:17:04.513600       1 controller.go:167] Shutting down OpenAPI controller
	I0731 21:17:04.513607       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0731 21:17:04.513650       1 controller.go:157] Shutting down quota evaluator
	I0731 21:17:04.515414       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.514505       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0731 21:17:04.514543       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0731 21:17:04.515455       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515463       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515468       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.515472       1 controller.go:176] quota evaluator worker shutdown
	I0731 21:17:04.514548       1 establishing_controller.go:87] Shutting down EstablishingController
	I0731 21:17:04.514556       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0731 21:17:04.513226       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0731 21:17:04.515530       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0731 21:17:04.518863       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 21:17:04.515002       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 21:17:04.515083       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-apiserver [dee0fd326ad4a1e0e1f077d7ca97d8626ad23fa4b4a24c09fbff7fa501cf61f2] <==
	I0731 21:17:22.128240       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 21:17:22.135104       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:17:22.135140       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:17:22.135149       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:17:22.142858       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:17:22.142897       1 policy_source.go:224] refreshing policies
	I0731 21:17:22.219081       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 21:17:22.219088       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:17:22.226845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 21:17:22.227690       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 21:17:22.228286       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 21:17:22.228329       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 21:17:22.228451       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 21:17:22.229630       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:17:22.232792       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 21:17:22.235415       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:17:23.030200       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 21:17:23.336833       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123]
	I0731 21:17:23.338247       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:17:23.344210       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 21:17:23.697686       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:17:23.710430       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:17:23.757412       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:17:23.807593       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:17:23.820440       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [b72e38826aabf8d7e87effc71bd545e91f5e15db5338fc66aa9263371ab79b73] <==
	I0731 21:17:34.376181       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 21:17:34.376352       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 21:17:34.379296       1 shared_informer.go:320] Caches are synced for job
	I0731 21:17:34.385927       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 21:17:34.386076       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 21:17:34.386127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.151µs"
	I0731 21:17:34.388846       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 21:17:34.389412       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 21:17:34.389474       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 21:17:34.389481       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 21:17:34.389490       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 21:17:34.391967       1 shared_informer.go:320] Caches are synced for taint
	I0731 21:17:34.392151       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 21:17:34.392250       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-355751"
	I0731 21:17:34.392396       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 21:17:34.394821       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 21:17:34.396727       1 shared_informer.go:320] Caches are synced for GC
	I0731 21:17:34.398261       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 21:17:34.449656       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 21:17:34.494904       1 shared_informer.go:320] Caches are synced for disruption
	I0731 21:17:34.572884       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:17:34.578090       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:17:34.983092       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:17:34.983148       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 21:17:35.013347       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e] <==
	I0731 21:16:59.323483       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0731 21:16:59.323632       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0731 21:16:59.323721       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0731 21:16:59.372528       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0731 21:16:59.372662       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	E0731 21:16:59.423376       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0731 21:16:59.423417       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0731 21:16:59.473211       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0731 21:16:59.473351       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0731 21:16:59.473330       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0731 21:16:59.473454       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0731 21:16:59.523360       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0731 21:16:59.523594       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0731 21:16:59.523682       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0731 21:16:59.572908       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0731 21:16:59.573197       1 stateful_set.go:160] "Starting stateful set controller" logger="statefulset-controller"
	I0731 21:16:59.573240       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0731 21:16:59.623620       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0731 21:16:59.623825       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0731 21:16:59.624007       1 shared_informer.go:313] Waiting for caches to sync for TTL
	W0731 21:17:09.674499       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:10.175317       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:11.176120       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	W0731 21:17:13.177482       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.123:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.123:8443: connect: connection refused
	E0731 21:17:13.177610       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.39.123:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-proxy [42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595] <==
	I0731 21:16:54.620895       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:16:56.654290       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0731 21:16:56.696522       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:16:56.696624       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:16:56.696659       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:16:56.699018       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:16:56.699243       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:16:56.699496       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:16:56.700591       1 config.go:192] "Starting service config controller"
	I0731 21:16:56.700841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:16:56.700975       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:16:56.701040       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:16:56.701553       1 config.go:319] "Starting node config controller"
	I0731 21:16:56.704646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:16:56.801589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:16:56.801602       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:16:56.806956       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dd6c692486669718a6f871c392ff95c010733e0c934afa5f1e992a2f427150fb] <==
	I0731 21:17:22.850111       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:17:22.869184       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0731 21:17:22.903628       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:17:22.903810       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:17:22.903871       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:17:22.906500       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:17:22.906694       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:17:22.906978       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:17:22.908711       1 config.go:192] "Starting service config controller"
	I0731 21:17:22.909010       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:17:22.909082       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:17:22.909111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:17:22.910200       1 config.go:319] "Starting node config controller"
	I0731 21:17:22.910312       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:17:23.009502       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:17:23.009665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:17:23.010572       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3cee4515db0bae79334b193893dd969e5e270f8b36170311194a407caf2bbfdb] <==
	I0731 21:17:20.132400       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:17:22.123185       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:17:22.123273       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:17:22.123303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:17:22.123327       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:17:22.142634       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:17:22.142879       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:17:22.148266       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:17:22.148313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:17:22.148427       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:17:22.148538       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:17:22.249553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0] <==
	I0731 21:16:55.564394       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:16:56.581411       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:16:56.581537       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:16:56.581567       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:16:56.581622       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:16:56.637263       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:16:56.638696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:16:56.654230       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:16:56.654478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:16:56.654536       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:16:56.654577       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:16:56.755112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 21:17:04.358520       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620383    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1555b141973e714407462de2a0cd7cb-ca-certs\") pod \"kube-controller-manager-pause-355751\" (UID: \"c1555b141973e714407462de2a0cd7cb\") " pod="kube-system/kube-controller-manager-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620425    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1555b141973e714407462de2a0cd7cb-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-355751\" (UID: \"c1555b141973e714407462de2a0cd7cb\") " pod="kube-system/kube-controller-manager-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.620484    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/736c8ddf630aaaf8e7cf6c539aaecc56-etcd-data\") pod \"etcd-pause-355751\" (UID: \"736c8ddf630aaaf8e7cf6c539aaecc56\") " pod="kube-system/etcd-pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.706083    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: E0731 21:17:18.706963    3255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.123:8443: connect: connection refused" node="pause-355751"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.863637    3255 scope.go:117] "RemoveContainer" containerID="c3d4757f0518595b44edd15d54bf8bee088b2e499e50f2841c1669285d411f0d"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.864835    3255 scope.go:117] "RemoveContainer" containerID="283c006e5fba02ef9c06faeb949c238ad284554c6bc53b502276c2c16251105b"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.869664    3255 scope.go:117] "RemoveContainer" containerID="e88a66dc8ea8a29ba6ca21ba9006562b09f6a368788d0135a20fa983e1ef699e"
	Jul 31 21:17:18 pause-355751 kubelet[3255]: I0731 21:17:18.870226    3255 scope.go:117] "RemoveContainer" containerID="d6b33a52034f7de076d851c66d61544ba205d223340fb41217afeac4bbc368d0"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: E0731 21:17:19.013525    3255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-355751?timeout=10s\": dial tcp 192.168.39.123:8443: connect: connection refused" interval="800ms"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: I0731 21:17:19.109566    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: E0731 21:17:19.110518    3255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.123:8443: connect: connection refused" node="pause-355751"
	Jul 31 21:17:19 pause-355751 kubelet[3255]: I0731 21:17:19.912340    3255 kubelet_node_status.go:73] "Attempting to register node" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.181920    3255 kubelet_node_status.go:112] "Node was previously registered" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.182028    3255 kubelet_node_status.go:76] "Successfully registered node" node="pause-355751"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.183600    3255 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.185076    3255 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.384846    3255 apiserver.go:52] "Watching apiserver"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.389166    3255 topology_manager.go:215] "Topology Admit Handler" podUID="1c6f3c03-d0ff-46aa-8f1e-8ed8bcfde2b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mmxvr"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.389673    3255 topology_manager.go:215] "Topology Admit Handler" podUID="12d54d0f-6c0e-4234-a2b1-04a55f854cc5" podNamespace="kube-system" podName="kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.410826    3255 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.507012    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d54d0f-6c0e-4234-a2b1-04a55f854cc5-xtables-lock\") pod \"kube-proxy-5gxch\" (UID: \"12d54d0f-6c0e-4234-a2b1-04a55f854cc5\") " pod="kube-system/kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.507367    3255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d54d0f-6c0e-4234-a2b1-04a55f854cc5-lib-modules\") pod \"kube-proxy-5gxch\" (UID: \"12d54d0f-6c0e-4234-a2b1-04a55f854cc5\") " pod="kube-system/kube-proxy-5gxch"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.690874    3255 scope.go:117] "RemoveContainer" containerID="42127e81d231b2b7fe73dc54b3ab5d78558810eb090726b52dc08b425ba8e595"
	Jul 31 21:17:22 pause-355751 kubelet[3255]: I0731 21:17:22.691058    3255 scope.go:117] "RemoveContainer" containerID="3700334b2c0a240e620f18198a7c8f57b7519bd25d4012e858725bdf449762e9"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-355751 -n pause-355751
helpers_test.go:261: (dbg) Run:  kubectl --context pause-355751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (79.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (296.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m56.429274835s)

                                                
                                                
-- stdout --
	* [old-k8s-version-275462] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-275462" primary control-plane node in "old-k8s-version-275462" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:18:16.490368 1142911 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:18:16.490480 1142911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:18:16.490490 1142911 out.go:304] Setting ErrFile to fd 2...
	I0731 21:18:16.490496 1142911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:18:16.490699 1142911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:18:16.491336 1142911 out.go:298] Setting JSON to false
	I0731 21:18:16.492508 1142911 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18047,"bootTime":1722442649,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:18:16.492576 1142911 start.go:139] virtualization: kvm guest
	I0731 21:18:16.494714 1142911 out.go:177] * [old-k8s-version-275462] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:18:16.496016 1142911 notify.go:220] Checking for updates...
	I0731 21:18:16.496039 1142911 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:18:16.497221 1142911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:18:16.498362 1142911 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:18:16.499512 1142911 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:18:16.500708 1142911 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:18:16.502037 1142911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:18:16.503576 1142911 config.go:182] Loaded profile config "cert-expiration-238338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:18:16.503669 1142911 config.go:182] Loaded profile config "kubernetes-upgrade-202332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:18:16.503754 1142911 config.go:182] Loaded profile config "stopped-upgrade-140201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 21:18:16.503865 1142911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:18:16.542965 1142911 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:18:16.544277 1142911 start.go:297] selected driver: kvm2
	I0731 21:18:16.544297 1142911 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:18:16.544311 1142911 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:18:16.545120 1142911 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:18:16.545216 1142911 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:18:16.561820 1142911 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:18:16.561910 1142911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:18:16.562129 1142911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:18:16.562176 1142911 cni.go:84] Creating CNI manager for ""
	I0731 21:18:16.562186 1142911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:18:16.562194 1142911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:18:16.562265 1142911 start.go:340] cluster config:
	{Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:18:16.562376 1142911 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:18:16.564033 1142911 out.go:177] * Starting "old-k8s-version-275462" primary control-plane node in "old-k8s-version-275462" cluster
	I0731 21:18:16.565167 1142911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:18:16.565217 1142911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:18:16.565233 1142911 cache.go:56] Caching tarball of preloaded images
	I0731 21:18:16.565354 1142911 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:18:16.565369 1142911 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 21:18:16.565491 1142911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:18:16.565517 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json: {Name:mk1a846e46c0a90a434dfed9f52cde42ac5e726f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:16.565681 1142911 start.go:360] acquireMachinesLock for old-k8s-version-275462: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:18:43.868645 1142911 start.go:364] duration metric: took 27.302907414s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:18:43.868781 1142911 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:18:43.868931 1142911 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:18:43.871694 1142911 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 21:18:43.871929 1142911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:18:43.872010 1142911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:18:43.893067 1142911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0731 21:18:43.893691 1142911 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:18:43.894355 1142911 main.go:141] libmachine: Using API Version  1
	I0731 21:18:43.894387 1142911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:18:43.894774 1142911 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:18:43.894998 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:18:43.895160 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:18:43.895329 1142911 start.go:159] libmachine.API.Create for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:18:43.895369 1142911 client.go:168] LocalClient.Create starting
	I0731 21:18:43.895411 1142911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 21:18:43.895460 1142911 main.go:141] libmachine: Decoding PEM data...
	I0731 21:18:43.895486 1142911 main.go:141] libmachine: Parsing certificate...
	I0731 21:18:43.895561 1142911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 21:18:43.895592 1142911 main.go:141] libmachine: Decoding PEM data...
	I0731 21:18:43.895616 1142911 main.go:141] libmachine: Parsing certificate...
	I0731 21:18:43.895642 1142911 main.go:141] libmachine: Running pre-create checks...
	I0731 21:18:43.895655 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .PreCreateCheck
	I0731 21:18:43.896067 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:18:43.896615 1142911 main.go:141] libmachine: Creating machine...
	I0731 21:18:43.896633 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .Create
	I0731 21:18:43.896806 1142911 main.go:141] libmachine: (old-k8s-version-275462) Creating KVM machine...
	I0731 21:18:43.898213 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found existing default KVM network
	I0731 21:18:43.899677 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:43.899474 1143286 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:e4:80} reservation:<nil>}
	I0731 21:18:43.900532 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:43.900449 1143286 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:81:d5} reservation:<nil>}
	I0731 21:18:43.901556 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:43.901441 1143286 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e9:da:0c} reservation:<nil>}
	I0731 21:18:43.902748 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:43.902605 1143286 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030f630}
	I0731 21:18:43.902789 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | created network xml: 
	I0731 21:18:43.902801 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | <network>
	I0731 21:18:43.902815 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   <name>mk-old-k8s-version-275462</name>
	I0731 21:18:43.902825 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   <dns enable='no'/>
	I0731 21:18:43.902837 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   
	I0731 21:18:43.902846 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0731 21:18:43.902856 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |     <dhcp>
	I0731 21:18:43.902865 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0731 21:18:43.902875 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |     </dhcp>
	I0731 21:18:43.902889 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   </ip>
	I0731 21:18:43.902901 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG |   
	I0731 21:18:43.902912 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | </network>
	I0731 21:18:43.902926 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | 
	I0731 21:18:43.908183 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | trying to create private KVM network mk-old-k8s-version-275462 192.168.72.0/24...
	I0731 21:18:43.990702 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | private KVM network mk-old-k8s-version-275462 192.168.72.0/24 created
	I0731 21:18:43.990738 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462 ...
	I0731 21:18:43.990751 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:43.990631 1143286 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:18:43.990773 1142911 main.go:141] libmachine: (old-k8s-version-275462) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:18:43.990797 1142911 main.go:141] libmachine: (old-k8s-version-275462) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:18:44.265156 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:44.265005 1143286 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa...
	I0731 21:18:44.442960 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:44.442816 1143286 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/old-k8s-version-275462.rawdisk...
	I0731 21:18:44.442991 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Writing magic tar header
	I0731 21:18:44.443005 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Writing SSH key tar header
	I0731 21:18:44.443014 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:44.442952 1143286 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462 ...
	I0731 21:18:44.443132 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462
	I0731 21:18:44.443176 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 21:18:44.443195 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462 (perms=drwx------)
	I0731 21:18:44.443212 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:18:44.443232 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:18:44.443253 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 21:18:44.443266 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:18:44.443282 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 21:18:44.443301 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 21:18:44.443314 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:18:44.443325 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:18:44.443341 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Checking permissions on dir: /home
	I0731 21:18:44.443353 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Skipping /home - not owner
	I0731 21:18:44.443371 1142911 main.go:141] libmachine: (old-k8s-version-275462) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:18:44.443386 1142911 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:18:44.444540 1142911 main.go:141] libmachine: (old-k8s-version-275462) define libvirt domain using xml: 
	I0731 21:18:44.444566 1142911 main.go:141] libmachine: (old-k8s-version-275462) <domain type='kvm'>
	I0731 21:18:44.444578 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <name>old-k8s-version-275462</name>
	I0731 21:18:44.444587 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <memory unit='MiB'>2200</memory>
	I0731 21:18:44.444597 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <vcpu>2</vcpu>
	I0731 21:18:44.444613 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <features>
	I0731 21:18:44.444626 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <acpi/>
	I0731 21:18:44.444645 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <apic/>
	I0731 21:18:44.444683 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <pae/>
	I0731 21:18:44.444716 1142911 main.go:141] libmachine: (old-k8s-version-275462)     
	I0731 21:18:44.444727 1142911 main.go:141] libmachine: (old-k8s-version-275462)   </features>
	I0731 21:18:44.444738 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <cpu mode='host-passthrough'>
	I0731 21:18:44.444748 1142911 main.go:141] libmachine: (old-k8s-version-275462)   
	I0731 21:18:44.444758 1142911 main.go:141] libmachine: (old-k8s-version-275462)   </cpu>
	I0731 21:18:44.444769 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <os>
	I0731 21:18:44.444776 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <type>hvm</type>
	I0731 21:18:44.444812 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <boot dev='cdrom'/>
	I0731 21:18:44.444835 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <boot dev='hd'/>
	I0731 21:18:44.444842 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <bootmenu enable='no'/>
	I0731 21:18:44.444855 1142911 main.go:141] libmachine: (old-k8s-version-275462)   </os>
	I0731 21:18:44.444881 1142911 main.go:141] libmachine: (old-k8s-version-275462)   <devices>
	I0731 21:18:44.444897 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <disk type='file' device='cdrom'>
	I0731 21:18:44.444908 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/boot2docker.iso'/>
	I0731 21:18:44.444916 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <target dev='hdc' bus='scsi'/>
	I0731 21:18:44.444923 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <readonly/>
	I0731 21:18:44.444933 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </disk>
	I0731 21:18:44.444945 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <disk type='file' device='disk'>
	I0731 21:18:44.444960 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:18:44.444977 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/old-k8s-version-275462.rawdisk'/>
	I0731 21:18:44.444988 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <target dev='hda' bus='virtio'/>
	I0731 21:18:44.444997 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </disk>
	I0731 21:18:44.445006 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <interface type='network'>
	I0731 21:18:44.445017 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <source network='mk-old-k8s-version-275462'/>
	I0731 21:18:44.445032 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <model type='virtio'/>
	I0731 21:18:44.445045 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </interface>
	I0731 21:18:44.445056 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <interface type='network'>
	I0731 21:18:44.445066 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <source network='default'/>
	I0731 21:18:44.445079 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <model type='virtio'/>
	I0731 21:18:44.445091 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </interface>
	I0731 21:18:44.445104 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <serial type='pty'>
	I0731 21:18:44.445120 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <target port='0'/>
	I0731 21:18:44.445134 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </serial>
	I0731 21:18:44.445146 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <console type='pty'>
	I0731 21:18:44.445157 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <target type='serial' port='0'/>
	I0731 21:18:44.445170 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </console>
	I0731 21:18:44.445181 1142911 main.go:141] libmachine: (old-k8s-version-275462)     <rng model='virtio'>
	I0731 21:18:44.445194 1142911 main.go:141] libmachine: (old-k8s-version-275462)       <backend model='random'>/dev/random</backend>
	I0731 21:18:44.445206 1142911 main.go:141] libmachine: (old-k8s-version-275462)     </rng>
	I0731 21:18:44.445215 1142911 main.go:141] libmachine: (old-k8s-version-275462)     
	I0731 21:18:44.445225 1142911 main.go:141] libmachine: (old-k8s-version-275462)     
	I0731 21:18:44.445233 1142911 main.go:141] libmachine: (old-k8s-version-275462)   </devices>
	I0731 21:18:44.445243 1142911 main.go:141] libmachine: (old-k8s-version-275462) </domain>
	I0731 21:18:44.445254 1142911 main.go:141] libmachine: (old-k8s-version-275462) 
	I0731 21:18:44.452674 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:9b:aa:e4 in network default
	I0731 21:18:44.453288 1142911 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:18:44.453316 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:44.454049 1142911 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:18:44.454341 1142911 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:18:44.454984 1142911 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:18:44.455794 1142911 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:18:45.719353 1142911 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:18:45.720304 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:45.720782 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:45.720851 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:45.720749 1143286 retry.go:31] will retry after 291.213122ms: waiting for machine to come up
	I0731 21:18:46.013473 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:46.014056 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:46.014090 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:46.013996 1143286 retry.go:31] will retry after 386.192021ms: waiting for machine to come up
	I0731 21:18:46.401463 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:46.402015 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:46.402049 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:46.401959 1143286 retry.go:31] will retry after 421.035983ms: waiting for machine to come up
	I0731 21:18:46.824563 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:46.825093 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:46.825124 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:46.825034 1143286 retry.go:31] will retry after 565.160318ms: waiting for machine to come up
	I0731 21:18:47.391883 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:47.392447 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:47.392478 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:47.392394 1143286 retry.go:31] will retry after 599.948202ms: waiting for machine to come up
	I0731 21:18:47.994407 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:47.994908 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:47.994939 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:47.994852 1143286 retry.go:31] will retry after 913.915511ms: waiting for machine to come up
	I0731 21:18:48.910210 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:48.910837 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:48.910864 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:48.910792 1143286 retry.go:31] will retry after 1.132292543s: waiting for machine to come up
	I0731 21:18:50.045214 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:50.045767 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:50.045789 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:50.045704 1143286 retry.go:31] will retry after 1.24888579s: waiting for machine to come up
	I0731 21:18:51.296261 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:51.296837 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:51.296870 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:51.296776 1143286 retry.go:31] will retry after 1.446102829s: waiting for machine to come up
	I0731 21:18:52.745325 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:52.745869 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:52.745897 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:52.745812 1143286 retry.go:31] will retry after 1.822385142s: waiting for machine to come up
	I0731 21:18:54.570604 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:54.571177 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:54.571231 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:54.571134 1143286 retry.go:31] will retry after 1.785199692s: waiting for machine to come up
	I0731 21:18:56.359981 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:56.360686 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:56.360714 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:56.360621 1143286 retry.go:31] will retry after 2.798943746s: waiting for machine to come up
	I0731 21:18:59.161757 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:18:59.162193 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:18:59.162221 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:18:59.162147 1143286 retry.go:31] will retry after 4.377795826s: waiting for machine to come up
	I0731 21:19:03.541661 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:03.542366 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:19:03.542393 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:19:03.542289 1143286 retry.go:31] will retry after 3.556794784s: waiting for machine to come up
	I0731 21:19:07.101424 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.102192 1142911 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:19:07.102228 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.102236 1142911 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:19:07.102746 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462
	I0731 21:19:07.197597 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:19:07.197636 1142911 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:19:07.197654 1142911 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:19:07.201067 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.205157 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.205217 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.205769 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:19:07.205807 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:19:07.205838 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:19:07.205848 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:19:07.205863 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:19:07.336647 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:19:07.336958 1142911 main.go:141] libmachine: (old-k8s-version-275462) KVM machine creation complete!
	I0731 21:19:07.337367 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:19:07.337993 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:07.338252 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:07.338446 1142911 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:19:07.338464 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:19:07.340146 1142911 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:19:07.340178 1142911 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:19:07.340186 1142911 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:19:07.340196 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.343204 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.343715 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.343751 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.343920 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:07.344155 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.344352 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.344596 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:07.344790 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:07.345013 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:07.345028 1142911 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:19:07.447529 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:19:07.447566 1142911 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:19:07.447576 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.451154 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.451630 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.451666 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.451880 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:07.452116 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.452320 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.452499 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:07.452692 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:07.452890 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:07.452908 1142911 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:19:07.561199 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:19:07.561301 1142911 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:19:07.561317 1142911 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:19:07.561334 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:19:07.561629 1142911 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:19:07.561662 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:19:07.561900 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.565274 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.565704 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.565741 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.566060 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:07.566279 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.566495 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.566673 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:07.566884 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:07.567130 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:07.567148 1142911 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:19:07.694269 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:19:07.694307 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.697568 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.697929 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.697961 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.698149 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:07.698426 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.698634 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.698849 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:07.699073 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:07.699322 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:07.699350 1142911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:19:07.814160 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:19:07.814209 1142911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:19:07.814243 1142911 buildroot.go:174] setting up certificates
	I0731 21:19:07.814257 1142911 provision.go:84] configureAuth start
	I0731 21:19:07.814275 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:19:07.814642 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:19:07.818019 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.818383 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.818407 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.818619 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.821373 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.821778 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.821819 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.822008 1142911 provision.go:143] copyHostCerts
	I0731 21:19:07.822070 1142911 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:19:07.822081 1142911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:19:07.822129 1142911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:19:07.822253 1142911 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:19:07.822267 1142911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:19:07.822300 1142911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:19:07.822377 1142911 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:19:07.822386 1142911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:19:07.822407 1142911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:19:07.822461 1142911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:19:07.973286 1142911 provision.go:177] copyRemoteCerts
	I0731 21:19:07.973347 1142911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:19:07.973373 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:07.976392 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.976705 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:07.976733 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:07.976939 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:07.977113 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:07.977236 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:07.977360 1142911 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:19:08.058515 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:19:08.082883 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:19:08.106737 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:19:08.130421 1142911 provision.go:87] duration metric: took 316.141693ms to configureAuth
	I0731 21:19:08.130461 1142911 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:19:08.130651 1142911 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:19:08.130824 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:08.133546 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.133927 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.133956 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.134144 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:08.134342 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.134533 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.134644 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:08.134846 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:08.135021 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:08.135035 1142911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:19:08.409282 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:19:08.409312 1142911 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:19:08.409323 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetURL
	I0731 21:19:08.410795 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using libvirt version 6000000
	I0731 21:19:08.414002 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.414448 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.414483 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.414818 1142911 main.go:141] libmachine: Docker is up and running!
	I0731 21:19:08.414839 1142911 main.go:141] libmachine: Reticulating splines...
	I0731 21:19:08.414848 1142911 client.go:171] duration metric: took 24.519467967s to LocalClient.Create
	I0731 21:19:08.414877 1142911 start.go:167] duration metric: took 24.519549023s to libmachine.API.Create "old-k8s-version-275462"
	I0731 21:19:08.414892 1142911 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:19:08.414908 1142911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:19:08.414931 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:08.415209 1142911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:19:08.415245 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:08.418208 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.418654 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.418684 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.419000 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:08.419277 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.419479 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:08.419678 1142911 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:19:08.506836 1142911 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:19:08.511351 1142911 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:19:08.511389 1142911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:19:08.511469 1142911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:19:08.511560 1142911 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:19:08.511683 1142911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:19:08.524379 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:19:08.552321 1142911 start.go:296] duration metric: took 137.408186ms for postStartSetup
	I0731 21:19:08.552394 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:19:08.553110 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:19:08.555618 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.556028 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.556072 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.556415 1142911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:19:08.556682 1142911 start.go:128] duration metric: took 24.687735305s to createHost
	I0731 21:19:08.556716 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:08.559927 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.560369 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.560396 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.560611 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:08.560838 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.561006 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.561220 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:08.561444 1142911 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:08.561678 1142911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:19:08.561691 1142911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:19:08.669413 1142911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460748.646311717
	
	I0731 21:19:08.669442 1142911 fix.go:216] guest clock: 1722460748.646311717
	I0731 21:19:08.669450 1142911 fix.go:229] Guest: 2024-07-31 21:19:08.646311717 +0000 UTC Remote: 2024-07-31 21:19:08.556700259 +0000 UTC m=+52.105769979 (delta=89.611458ms)
	I0731 21:19:08.669497 1142911 fix.go:200] guest clock delta is within tolerance: 89.611458ms
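The fix.go lines above compare the guest clock, sampled over SSH, against the host-side reference timestamp labelled Remote; the reported delta is just the difference of the two epoch values. A quick check of the arithmetic (values copied from the log):
	# guest:  1722460748.646311717   (2024-07-31 21:19:08.646311717 UTC, output of `date +%s.%N` on the VM)
	# remote: 1722460748.556700259   (the Remote timestamp above)
	# delta:  1722460748.646311717 - 1722460748.556700259 = 0.089611458 s = 89.611458ms, within tolerance
	date +%s.%N   # the command minikube ran on the guest to sample its clock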
	I0731 21:19:08.669505 1142911 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 24.800776894s
	I0731 21:19:08.669534 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:08.669878 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:19:08.673288 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.673752 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.673790 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.674025 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:08.674739 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:08.674982 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:19:08.675095 1142911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:19:08.675147 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:08.675289 1142911 ssh_runner.go:195] Run: cat /version.json
	I0731 21:19:08.675318 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:19:08.678417 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.678649 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.678787 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.678819 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.679023 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:08.679091 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:08.679122 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:08.679250 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.679309 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:19:08.679389 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:08.679501 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:19:08.679583 1142911 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:19:08.679674 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:19:08.679852 1142911 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:19:08.779069 1142911 ssh_runner.go:195] Run: systemctl --version
	I0731 21:19:08.785842 1142911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:19:08.958580 1142911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:19:08.966694 1142911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:19:08.966787 1142911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:19:08.989354 1142911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:19:08.989384 1142911 start.go:495] detecting cgroup driver to use...
	I0731 21:19:08.989459 1142911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:19:09.010789 1142911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:19:09.026170 1142911 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:19:09.026247 1142911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:19:09.040784 1142911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:19:09.060392 1142911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:19:09.201274 1142911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:19:09.336342 1142911 docker.go:233] disabling docker service ...
	I0731 21:19:09.336430 1142911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:19:09.356663 1142911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:19:09.373741 1142911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:19:09.528594 1142911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:19:09.674671 1142911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:19:09.690015 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:19:09.713874 1142911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:19:09.714023 1142911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:09.728983 1142911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:19:09.729075 1142911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:09.743194 1142911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:09.755030 1142911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
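The sed edits above rewrite minikube's CRI-O drop-in in place. A minimal way to confirm the result on the guest; the expected values below are taken from the sed commands themselves, not from a captured copy of the file:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"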
	I0731 21:19:09.766903 1142911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:19:09.778681 1142911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:19:09.791347 1142911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:19:09.791448 1142911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:19:09.810563 1142911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
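The failed sysctl is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is why minikube falls back to modprobe and then enables IPv4 forwarding. A minimal re-check after the two commands above (standard tooling, not part of this run):
	lsmod | grep br_netfilter                   # module loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables   # resolvable now that the module is present
	cat /proc/sys/net/ipv4/ip_forward           # 1, written by the echo above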
	I0731 21:19:09.823229 1142911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:09.943702 1142911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:19:10.095224 1142911 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:19:10.095328 1142911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:19:10.100064 1142911 start.go:563] Will wait 60s for crictl version
	I0731 21:19:10.100165 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:10.104196 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:19:10.146277 1142911 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:19:10.146372 1142911 ssh_runner.go:195] Run: crio --version
	I0731 21:19:10.174905 1142911 ssh_runner.go:195] Run: crio --version
	I0731 21:19:10.215239 1142911 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:19:10.216651 1142911 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:19:10.219900 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:10.220274 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:58 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:19:10.220302 1142911 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:19:10.220558 1142911 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:19:10.224916 1142911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
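The bash one-liner above updates /etc/hosts by filtering into a temp file and copying it back; the same pattern is reused for control-plane.minikube.internal at 21:19:15.491 below. Verifying both entries on the guest would look like:
	grep -E 'minikube\.internal' /etc/hosts
	# 192.168.72.1      host.minikube.internal
	# 192.168.72.107    control-plane.minikube.internal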
	I0731 21:19:10.238125 1142911 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:19:10.238279 1142911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:19:10.238351 1142911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:10.278458 1142911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:19:10.278571 1142911 ssh_runner.go:195] Run: which lz4
	I0731 21:19:10.282698 1142911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:19:10.287035 1142911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:19:10.287081 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:19:11.835121 1142911 crio.go:462] duration metric: took 1.552473991s to copy over tarball
	I0731 21:19:11.835212 1142911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:19:14.498464 1142911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.663212625s)
	I0731 21:19:14.498517 1142911 crio.go:469] duration metric: took 2.663357361s to extract the tarball
	I0731 21:19:14.498529 1142911 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:19:14.543401 1142911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:14.587404 1142911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
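Even after the preload tarball is copied and extracted into /var, crictl still reports none of the expected v1.20.0 images, so minikube falls back to its per-image cache (LoadCachedImages below). A manual spot-check of what the extraction actually provided would be:
	sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause'
	# empty output here matches the "assuming images are not preloaded" message above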
	I0731 21:19:14.587439 1142911 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:19:14.587514 1142911 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:19:14.587534 1142911 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:19:14.587541 1142911 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:19:14.587554 1142911 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:19:14.587520 1142911 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:19:14.587591 1142911 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:19:14.587600 1142911 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:19:14.587525 1142911 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:19:14.589430 1142911 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:19:14.589454 1142911 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:19:14.589463 1142911 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:19:14.589465 1142911 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:19:14.589430 1142911 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:19:14.589464 1142911 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:19:14.589579 1142911 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:19:14.589888 1142911 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:19:14.738327 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:19:14.743292 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:19:14.747163 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:19:14.747531 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:19:14.747848 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:19:14.754623 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:19:14.767139 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:19:14.854529 1142911 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:19:14.854589 1142911 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:19:14.854645 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.918390 1142911 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:19:14.918433 1142911 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:19:14.918484 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.932997 1142911 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:19:14.933018 1142911 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:19:14.933056 1142911 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:19:14.933056 1142911 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:19:14.933107 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.933108 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.933025 1142911 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:19:14.933221 1142911 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:19:14.933255 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.948899 1142911 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:19:14.948929 1142911 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:19:14.948953 1142911 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:19:14.948964 1142911 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:19:14.948990 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:19:14.949010 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:19:14.949040 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:19:14.948996 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.948998 1142911 ssh_runner.go:195] Run: which crictl
	I0731 21:19:14.949080 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:19:14.949134 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:19:15.047704 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:19:15.047777 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:19:15.062138 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:19:15.062169 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:19:15.067779 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:19:15.067825 1142911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:19:15.067844 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:19:15.129259 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:19:15.129327 1142911 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:19:15.193897 1142911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:19:15.338336 1142911 cache_images.go:92] duration metric: took 750.875338ms to LoadCachedImages
	W0731 21:19:15.338453 1142911 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 21:19:15.338474 1142911 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:19:15.338620 1142911 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
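The [Unit]/[Service] fragment above is written as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 21:19:15.418 below) next to the base kubelet.service unit. Inspecting and applying it on the node follows the usual systemd flow; the last two commands are what minikube itself runs at 21:19:15.508 and 21:19:15.648:
	systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload
	sudo systemctl start kubelet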
	I0731 21:19:15.338716 1142911 ssh_runner.go:195] Run: crio config
	I0731 21:19:15.392486 1142911 cni.go:84] Creating CNI manager for ""
	I0731 21:19:15.392518 1142911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:15.392532 1142911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:19:15.392553 1142911 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:19:15.392692 1142911 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:19:15.392774 1142911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:19:15.405112 1142911 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:19:15.405216 1142911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:19:15.418666 1142911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:19:15.438498 1142911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:19:15.460759 1142911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
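The kubeadm config rendered above (2123 bytes) is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml just before init (21:19:16.798 below). A hedged way to sanity-check it by hand, using the same pinned binary path the log shows minikube using:
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# --dry-run renders the manifests and reports problems without changing the node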
	I0731 21:19:15.484295 1142911 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:19:15.491016 1142911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:19:15.508614 1142911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:15.648083 1142911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:19:15.666040 1142911 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:19:15.666070 1142911 certs.go:194] generating shared ca certs ...
	I0731 21:19:15.666088 1142911 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:15.666255 1142911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:19:15.666311 1142911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:19:15.666322 1142911 certs.go:256] generating profile certs ...
	I0731 21:19:15.666396 1142911 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:19:15.666420 1142911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt with IP's: []
	I0731 21:19:15.923202 1142911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt ...
	I0731 21:19:15.923240 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: {Name:mkc8ab71374bebb53ecb61f7cc8792a6e87e7871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:15.923456 1142911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key ...
	I0731 21:19:15.923474 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key: {Name:mk7cae9852d330d0c4c36d2c9aaefe1d2e6dd0c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:15.923585 1142911 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:19:15.923608 1142911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt.512f5421 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.107]
	I0731 21:19:16.096268 1142911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt.512f5421 ...
	I0731 21:19:16.096309 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt.512f5421: {Name:mk2b5a715e6d2b8980654afdd10f54cde2ccacc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:16.096511 1142911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421 ...
	I0731 21:19:16.096530 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421: {Name:mk9df86df7dd4a7e9de28d230465a14c66d08de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:16.096639 1142911 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt.512f5421 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt
	I0731 21:19:16.096734 1142911 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key
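The apiserver certificate generated above is signed for the service VIP, loopback, and the node IP (the IP list at 21:19:15.923). Its SANs can be confirmed with openssl against the profile path from the log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expected IPs: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.107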
	I0731 21:19:16.096814 1142911 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:19:16.096838 1142911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt with IP's: []
	I0731 21:19:16.306234 1142911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt ...
	I0731 21:19:16.306273 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt: {Name:mkc66cfae5906eeb256e0dbe41233bae1cb7c33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:16.306472 1142911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key ...
	I0731 21:19:16.306491 1142911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key: {Name:mkfe34aa5466b9906397cc8c006b8b65473f1eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:16.306716 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:19:16.306779 1142911 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:19:16.306795 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:19:16.306828 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:19:16.306871 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:19:16.306908 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:19:16.306967 1142911 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:19:16.307675 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:19:16.334581 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:19:16.361637 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:19:16.388719 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:19:16.416180 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:19:16.444910 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:19:16.469388 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:19:16.496604 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:19:16.528518 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:19:16.564171 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:19:16.593227 1142911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:19:16.617043 1142911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:19:16.635328 1142911 ssh_runner.go:195] Run: openssl version
	I0731 21:19:16.641302 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:19:16.653102 1142911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:19:16.657728 1142911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:19:16.657811 1142911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:19:16.663807 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:19:16.675646 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:19:16.688688 1142911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:19:16.693567 1142911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:19:16.693641 1142911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:19:16.699408 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:19:16.710291 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:19:16.721416 1142911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:16.726186 1142911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:16.726271 1142911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:16.731897 1142911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
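Each CA bundle copied under /usr/share/ca-certificates is exposed to OpenSSL through a subject-hash symlink in /etc/ssl/certs, which is what the openssl x509 -hash / ln -fs pairs above set up. For the minikube CA, for example:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, the hash used in the ln -fs above
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink created by that ln -fs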
	I0731 21:19:16.742985 1142911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:19:16.747132 1142911 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:19:16.747195 1142911 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:16.747296 1142911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:19:16.747363 1142911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:19:16.787809 1142911 cri.go:89] found id: ""
	I0731 21:19:16.787889 1142911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:19:16.798009 1142911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:19:16.808064 1142911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:19:16.818193 1142911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:19:16.818215 1142911 kubeadm.go:157] found existing configuration files:
	
	I0731 21:19:16.818260 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:19:16.827804 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:19:16.827877 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:19:16.839004 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:19:16.849735 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:19:16.849802 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:19:16.860768 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:19:16.870006 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:19:16.870067 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:19:16.881022 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:19:16.891685 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:19:16.891749 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:19:16.901743 1142911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:19:17.156672 1142911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:21:15.168433 1142911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:21:15.168564 1142911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:21:15.170438 1142911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:21:15.170555 1142911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:21:15.170673 1142911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:21:15.170796 1142911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:21:15.170932 1142911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:21:15.171030 1142911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:21:15.172728 1142911 out.go:204]   - Generating certificates and keys ...
	I0731 21:21:15.172841 1142911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:21:15.172918 1142911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:21:15.173003 1142911 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:21:15.173068 1142911 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:21:15.173124 1142911 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:21:15.173165 1142911 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:21:15.173213 1142911 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:21:15.173344 1142911 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-275462] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0731 21:21:15.173429 1142911 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:21:15.173614 1142911 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-275462] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0731 21:21:15.173705 1142911 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:21:15.173803 1142911 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:21:15.173858 1142911 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:21:15.173951 1142911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:21:15.174021 1142911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:21:15.174093 1142911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:21:15.174181 1142911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:21:15.174250 1142911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:21:15.174389 1142911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:21:15.174477 1142911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:21:15.174523 1142911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:21:15.174581 1142911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:21:15.175865 1142911 out.go:204]   - Booting up control plane ...
	I0731 21:21:15.175964 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:21:15.176069 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:21:15.176160 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:21:15.176233 1142911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:21:15.176437 1142911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:21:15.176480 1142911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:21:15.176543 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:15.176709 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:15.176782 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:15.176953 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:15.177060 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:15.177268 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:15.177339 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:15.177508 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:15.177572 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:15.177759 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:21:15.177766 1142911 kubeadm.go:310] 
	I0731 21:21:15.177799 1142911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:21:15.177836 1142911 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:21:15.177846 1142911 kubeadm.go:310] 
	I0731 21:21:15.177888 1142911 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:21:15.177940 1142911 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:21:15.178047 1142911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:21:15.178056 1142911 kubeadm.go:310] 
	I0731 21:21:15.178139 1142911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:21:15.178179 1142911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:21:15.178208 1142911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:21:15.178214 1142911 kubeadm.go:310] 
	I0731 21:21:15.178305 1142911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:21:15.178385 1142911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:21:15.178406 1142911 kubeadm.go:310] 
	I0731 21:21:15.178495 1142911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:21:15.178575 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:21:15.178644 1142911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:21:15.178708 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:21:15.178787 1142911 kubeadm.go:310] 
	W0731 21:21:15.178854 1142911 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-275462] and IPs [192.168.72.107 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-275462] and IPs [192.168.72.107 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:21:15.178914 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:21:15.636908 1142911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:21:15.651732 1142911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:21:15.661423 1142911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:21:15.661450 1142911 kubeadm.go:157] found existing configuration files:
	
	I0731 21:21:15.661545 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:21:15.670878 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:21:15.670950 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:21:15.680589 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:21:15.689566 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:21:15.689661 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:21:15.699166 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:21:15.708421 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:21:15.708497 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:21:15.717959 1142911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:21:15.726885 1142911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:21:15.726960 1142911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:21:15.736475 1142911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:21:15.803032 1142911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:21:15.803163 1142911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:21:15.941327 1142911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:21:15.941488 1142911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:21:15.941643 1142911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:21:16.116485 1142911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:21:16.118431 1142911 out.go:204]   - Generating certificates and keys ...
	I0731 21:21:16.118546 1142911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:21:16.118638 1142911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:21:16.118735 1142911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:21:16.118792 1142911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:21:16.118866 1142911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:21:16.118920 1142911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:21:16.118981 1142911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:21:16.119084 1142911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:21:16.119201 1142911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:21:16.119313 1142911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:21:16.119379 1142911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:21:16.119450 1142911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:21:16.332996 1142911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:21:16.675185 1142911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:21:16.859507 1142911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:21:17.025545 1142911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:21:17.050395 1142911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:21:17.053598 1142911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:21:17.053738 1142911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:21:17.198547 1142911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:21:17.200425 1142911 out.go:204]   - Booting up control plane ...
	I0731 21:21:17.200594 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:21:17.207589 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:21:17.208962 1142911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:21:17.211511 1142911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:21:17.215924 1142911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:21:57.218570 1142911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:21:57.218777 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:21:57.219013 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:22:02.219765 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:22:02.219944 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:22:12.220686 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:22:12.220965 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:22:32.219747 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:22:32.220040 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:23:12.219553 1142911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:23:12.219779 1142911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:23:12.219792 1142911 kubeadm.go:310] 
	I0731 21:23:12.219863 1142911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:23:12.219937 1142911 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:23:12.219946 1142911 kubeadm.go:310] 
	I0731 21:23:12.219987 1142911 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:23:12.220048 1142911 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:23:12.220178 1142911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:23:12.220195 1142911 kubeadm.go:310] 
	I0731 21:23:12.220338 1142911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:23:12.220388 1142911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:23:12.220435 1142911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:23:12.220445 1142911 kubeadm.go:310] 
	I0731 21:23:12.220607 1142911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:23:12.220737 1142911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:23:12.220752 1142911 kubeadm.go:310] 
	I0731 21:23:12.220891 1142911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:23:12.220991 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:23:12.221112 1142911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:23:12.221220 1142911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:23:12.221231 1142911 kubeadm.go:310] 
	I0731 21:23:12.221766 1142911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:23:12.221873 1142911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:23:12.221959 1142911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:23:12.222045 1142911 kubeadm.go:394] duration metric: took 3m55.474854938s to StartCluster
	I0731 21:23:12.222095 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:23:12.222170 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:23:12.268456 1142911 cri.go:89] found id: ""
	I0731 21:23:12.268488 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.268499 1142911 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:23:12.268507 1142911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:23:12.268575 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:23:12.311331 1142911 cri.go:89] found id: ""
	I0731 21:23:12.311361 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.311370 1142911 logs.go:278] No container was found matching "etcd"
	I0731 21:23:12.311377 1142911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:23:12.311443 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:23:12.348062 1142911 cri.go:89] found id: ""
	I0731 21:23:12.348123 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.348135 1142911 logs.go:278] No container was found matching "coredns"
	I0731 21:23:12.348144 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:23:12.348219 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:23:12.382136 1142911 cri.go:89] found id: ""
	I0731 21:23:12.382171 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.382183 1142911 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:23:12.382192 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:23:12.382278 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:23:12.420889 1142911 cri.go:89] found id: ""
	I0731 21:23:12.420917 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.420929 1142911 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:23:12.420937 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:23:12.421000 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:23:12.462621 1142911 cri.go:89] found id: ""
	I0731 21:23:12.462652 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.462662 1142911 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:23:12.462669 1142911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:23:12.462736 1142911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:23:12.500005 1142911 cri.go:89] found id: ""
	I0731 21:23:12.500040 1142911 logs.go:276] 0 containers: []
	W0731 21:23:12.500052 1142911 logs.go:278] No container was found matching "kindnet"
	I0731 21:23:12.500065 1142911 logs.go:123] Gathering logs for kubelet ...
	I0731 21:23:12.500080 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:23:12.557573 1142911 logs.go:123] Gathering logs for dmesg ...
	I0731 21:23:12.557615 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:23:12.571612 1142911 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:23:12.571649 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:23:12.721857 1142911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:23:12.721888 1142911 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:23:12.721906 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:23:12.823082 1142911 logs.go:123] Gathering logs for container status ...
	I0731 21:23:12.823128 1142911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 21:23:12.864083 1142911 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:23:12.864154 1142911 out.go:239] * 
	W0731 21:23:12.864226 1142911 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:23:12.864257 1142911 out.go:239] * 
	W0731 21:23:12.865104 1142911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:23:12.868234 1142911 out.go:177] 
	W0731 21:23:12.869488 1142911 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:23:12.869539 1142911 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:23:12.869560 1142911 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:23:12.871330 1142911 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 6 (245.243428ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:13.157695 1145893 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275462" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.73s)
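The start failure above ends with minikube's own suggestion to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of applying that suggestion to this profile (flags copied from the failing invocation; whether the cgroup driver is actually at fault here is an assumption, not something the log confirms):
	# Retry the same profile with the cgroup-driver override suggested in the error output.
	out/minikube-linux-amd64 start -p old-k8s-version-275462 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet health check still refuses connections on :10248, inspect it on the node,
	# as the kubeadm output recommends:
	out/minikube-linux-amd64 ssh -p old-k8s-version-275462 -- sudo journalctl -xeu kubelet | tail -n 50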

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-018891 --alsologtostderr -v=3
E0731 21:22:00.018462 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-018891 --alsologtostderr -v=3: exit status 82 (2m0.533450628s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-018891"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:21:10.303620 1144878 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:21:10.303746 1144878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:21:10.303756 1144878 out.go:304] Setting ErrFile to fd 2...
	I0731 21:21:10.303760 1144878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:21:10.303950 1144878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:21:10.304232 1144878 out.go:298] Setting JSON to false
	I0731 21:21:10.304310 1144878 mustload.go:65] Loading cluster: no-preload-018891
	I0731 21:21:10.304704 1144878 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:21:10.304776 1144878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:21:10.304952 1144878 mustload.go:65] Loading cluster: no-preload-018891
	I0731 21:21:10.305053 1144878 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:21:10.305086 1144878 stop.go:39] StopHost: no-preload-018891
	I0731 21:21:10.305500 1144878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:21:10.305555 1144878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:21:10.321256 1144878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0731 21:21:10.321752 1144878 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:21:10.322345 1144878 main.go:141] libmachine: Using API Version  1
	I0731 21:21:10.322371 1144878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:21:10.322798 1144878 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:21:10.325217 1144878 out.go:177] * Stopping node "no-preload-018891"  ...
	I0731 21:21:10.326498 1144878 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 21:21:10.326549 1144878 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:21:10.326876 1144878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 21:21:10.326913 1144878 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:21:10.330265 1144878 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:21:10.330729 1144878 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:20:09 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:21:10.330762 1144878 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:21:10.330939 1144878 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:21:10.331160 1144878 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:21:10.331331 1144878 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:21:10.331523 1144878 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:21:10.433179 1144878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 21:21:10.497954 1144878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 21:21:10.563823 1144878 main.go:141] libmachine: Stopping "no-preload-018891"...
	I0731 21:21:10.563870 1144878 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:21:10.565687 1144878 main.go:141] libmachine: (no-preload-018891) Calling .Stop
	I0731 21:21:10.569495 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 0/120
	I0731 21:21:11.571120 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 1/120
	I0731 21:21:12.572543 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 2/120
	I0731 21:21:13.574889 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 3/120
	I0731 21:21:14.576545 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 4/120
	I0731 21:21:15.578914 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 5/120
	I0731 21:21:16.580610 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 6/120
	I0731 21:21:17.582298 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 7/120
	I0731 21:21:18.584072 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 8/120
	I0731 21:21:19.585931 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 9/120
	I0731 21:21:20.587556 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 10/120
	I0731 21:21:21.588980 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 11/120
	I0731 21:21:22.590657 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 12/120
	I0731 21:21:23.592270 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 13/120
	I0731 21:21:24.593929 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 14/120
	I0731 21:21:25.596133 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 15/120
	I0731 21:21:26.597509 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 16/120
	I0731 21:21:27.598912 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 17/120
	I0731 21:21:28.600327 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 18/120
	I0731 21:21:29.601659 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 19/120
	I0731 21:21:30.604167 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 20/120
	I0731 21:21:31.605612 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 21/120
	I0731 21:21:32.606842 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 22/120
	I0731 21:21:33.608701 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 23/120
	I0731 21:21:34.610756 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 24/120
	I0731 21:21:35.613108 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 25/120
	I0731 21:21:36.614831 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 26/120
	I0731 21:21:37.616159 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 27/120
	I0731 21:21:38.617654 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 28/120
	I0731 21:21:39.619201 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 29/120
	I0731 21:21:40.621621 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 30/120
	I0731 21:21:41.622867 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 31/120
	I0731 21:21:42.624641 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 32/120
	I0731 21:21:43.626681 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 33/120
	I0731 21:21:44.628485 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 34/120
	I0731 21:21:45.630632 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 35/120
	I0731 21:21:46.632441 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 36/120
	I0731 21:21:47.634559 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 37/120
	I0731 21:21:48.636006 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 38/120
	I0731 21:21:49.637358 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 39/120
	I0731 21:21:50.639778 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 40/120
	I0731 21:21:51.641199 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 41/120
	I0731 21:21:52.643612 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 42/120
	I0731 21:21:53.645190 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 43/120
	I0731 21:21:54.647208 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 44/120
	I0731 21:21:55.649191 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 45/120
	I0731 21:21:56.650636 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 46/120
	I0731 21:21:57.651894 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 47/120
	I0731 21:21:58.653422 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 48/120
	I0731 21:21:59.654821 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 49/120
	I0731 21:22:00.656649 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 50/120
	I0731 21:22:01.658252 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 51/120
	I0731 21:22:02.659920 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 52/120
	I0731 21:22:03.661499 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 53/120
	I0731 21:22:04.662971 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 54/120
	I0731 21:22:05.665235 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 55/120
	I0731 21:22:06.666703 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 56/120
	I0731 21:22:07.668176 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 57/120
	I0731 21:22:08.669344 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 58/120
	I0731 21:22:09.670913 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 59/120
	I0731 21:22:10.673338 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 60/120
	I0731 21:22:11.674907 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 61/120
	I0731 21:22:12.676456 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 62/120
	I0731 21:22:13.677708 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 63/120
	I0731 21:22:14.679385 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 64/120
	I0731 21:22:15.681203 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 65/120
	I0731 21:22:16.682634 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 66/120
	I0731 21:22:17.684177 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 67/120
	I0731 21:22:18.685546 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 68/120
	I0731 21:22:19.687024 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 69/120
	I0731 21:22:20.689441 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 70/120
	I0731 21:22:21.691087 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 71/120
	I0731 21:22:22.692471 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 72/120
	I0731 21:22:23.693861 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 73/120
	I0731 21:22:24.695364 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 74/120
	I0731 21:22:25.697770 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 75/120
	I0731 21:22:26.699294 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 76/120
	I0731 21:22:27.700715 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 77/120
	I0731 21:22:28.702388 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 78/120
	I0731 21:22:29.703913 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 79/120
	I0731 21:22:30.705623 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 80/120
	I0731 21:22:31.707079 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 81/120
	I0731 21:22:32.708652 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 82/120
	I0731 21:22:33.710854 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 83/120
	I0731 21:22:34.712260 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 84/120
	I0731 21:22:35.714333 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 85/120
	I0731 21:22:36.715908 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 86/120
	I0731 21:22:37.717339 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 87/120
	I0731 21:22:38.719165 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 88/120
	I0731 21:22:39.720802 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 89/120
	I0731 21:22:40.723243 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 90/120
	I0731 21:22:41.725994 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 91/120
	I0731 21:22:42.727682 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 92/120
	I0731 21:22:43.729147 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 93/120
	I0731 21:22:44.730594 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 94/120
	I0731 21:22:45.732553 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 95/120
	I0731 21:22:46.734277 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 96/120
	I0731 21:22:47.735709 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 97/120
	I0731 21:22:48.737538 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 98/120
	I0731 21:22:49.739954 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 99/120
	I0731 21:22:50.741407 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 100/120
	I0731 21:22:51.743012 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 101/120
	I0731 21:22:52.744680 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 102/120
	I0731 21:22:53.746287 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 103/120
	I0731 21:22:54.747901 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 104/120
	I0731 21:22:55.750217 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 105/120
	I0731 21:22:56.751665 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 106/120
	I0731 21:22:57.753530 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 107/120
	I0731 21:22:58.755133 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 108/120
	I0731 21:22:59.756673 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 109/120
	I0731 21:23:00.759229 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 110/120
	I0731 21:23:01.760874 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 111/120
	I0731 21:23:02.762660 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 112/120
	I0731 21:23:03.764422 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 113/120
	I0731 21:23:04.766400 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 114/120
	I0731 21:23:05.768267 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 115/120
	I0731 21:23:06.769570 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 116/120
	I0731 21:23:07.771108 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 117/120
	I0731 21:23:08.772578 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 118/120
	I0731 21:23:09.774987 1144878 main.go:141] libmachine: (no-preload-018891) Waiting for machine to stop 119/120
	I0731 21:23:10.776098 1144878 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 21:23:10.776185 1144878 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 21:23:10.777592 1144878 out.go:177] 
	W0731 21:23:10.778942 1144878 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 21:23:10.778958 1144878 out.go:239] * 
	* 
	W0731 21:23:10.783211 1144878 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:23:10.784618 1144878 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-018891 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891: exit status 3 (18.558133701s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:29.344450 1145845 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host
	E0731 21:23:29.344476 1145845 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-018891" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.09s)
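The stop above gives up after 120 one-second polls with the VM still reported as "Running" (GUEST_STOP_TIMEOUT). A hedged manual follow-up on the libvirt host, assuming the kvm2 driver named the domain after the profile as it normally does:
	# Check what libvirt thinks the domain state is (domain name assumed to match the profile).
	virsh list --all | grep no-preload-018891
	# Try a graceful ACPI shutdown first; fall back to a hard power-off only if that hangs.
	virsh shutdown no-preload-018891 || virsh destroy no-preload-018891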

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (148.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-563652 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-563652 --alsologtostderr -v=3: exit status 82 (2m9.958226358s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-563652"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:22:23.608592 1145361 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:22:23.608723 1145361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:22:23.608732 1145361 out.go:304] Setting ErrFile to fd 2...
	I0731 21:22:23.608738 1145361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:22:23.608932 1145361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:22:23.609222 1145361 out.go:298] Setting JSON to false
	I0731 21:22:23.609311 1145361 mustload.go:65] Loading cluster: embed-certs-563652
	I0731 21:22:23.609664 1145361 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:22:23.609759 1145361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:22:23.609953 1145361 mustload.go:65] Loading cluster: embed-certs-563652
	I0731 21:22:23.610059 1145361 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:22:23.610091 1145361 stop.go:39] StopHost: embed-certs-563652
	I0731 21:22:23.610521 1145361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:22:23.610585 1145361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:22:23.625945 1145361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46647
	I0731 21:22:23.626494 1145361 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:22:23.627096 1145361 main.go:141] libmachine: Using API Version  1
	I0731 21:22:23.627156 1145361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:22:23.627489 1145361 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:22:23.629796 1145361 out.go:177] * Stopping node "embed-certs-563652"  ...
	I0731 21:22:23.630972 1145361 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 21:22:23.631022 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:22:23.631297 1145361 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 21:22:23.631323 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:22:23.634276 1145361 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:22:23.634747 1145361 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:21:22 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:22:23.634768 1145361 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:22:23.634976 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:22:23.635172 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:22:23.635343 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:22:23.635502 1145361 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:22:23.743682 1145361 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 21:22:23.807756 1145361 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 21:22:23.874360 1145361 main.go:141] libmachine: Stopping "embed-certs-563652"...
	I0731 21:22:23.874403 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:22:23.876206 1145361 main.go:141] libmachine: (embed-certs-563652) Calling .Stop
	I0731 21:22:23.879915 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 0/120
	I0731 21:22:24.881825 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 1/120
	I0731 21:22:25.883604 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 2/120
	I0731 21:22:26.885058 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 3/120
	I0731 21:22:27.886503 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 4/120
	I0731 21:22:28.888545 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 5/120
	I0731 21:22:29.890917 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 6/120
	I0731 21:22:30.892609 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 7/120
	I0731 21:22:31.894089 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 8/120
	I0731 21:22:32.895723 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 9/120
	I0731 21:22:33.897998 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 10/120
	I0731 21:22:34.899503 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 11/120
	I0731 21:22:35.901162 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 12/120
	I0731 21:22:36.902853 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 13/120
	I0731 21:22:37.904444 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 14/120
	I0731 21:22:38.906627 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 15/120
	I0731 21:22:39.908392 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 16/120
	I0731 21:22:40.909959 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 17/120
	I0731 21:22:41.911612 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 18/120
	I0731 21:22:42.913520 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 19/120
	I0731 21:22:43.916076 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 20/120
	I0731 21:22:44.918134 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 21/120
	I0731 21:22:45.919830 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 22/120
	I0731 21:22:46.921206 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 23/120
	I0731 21:22:47.922815 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 24/120
	I0731 21:22:48.924655 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 25/120
	I0731 21:22:49.926888 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 26/120
	I0731 21:22:50.928432 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 27/120
	I0731 21:22:51.930164 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 28/120
	I0731 21:22:52.931641 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 29/120
	I0731 21:22:53.933943 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 30/120
	I0731 21:22:54.935536 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 31/120
	I0731 21:22:55.937211 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 32/120
	I0731 21:22:56.938801 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 33/120
	I0731 21:22:57.940546 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 34/120
	I0731 21:22:58.942722 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 35/120
	I0731 21:22:59.944777 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 36/120
	I0731 21:23:00.946489 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 37/120
	I0731 21:23:01.948057 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 38/120
	I0731 21:23:02.949585 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 39/120
	I0731 21:23:03.952008 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 40/120
	I0731 21:23:04.953565 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 41/120
	I0731 21:23:05.954903 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 42/120
	I0731 21:23:06.957167 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 43/120
	I0731 21:23:07.958589 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 44/120
	I0731 21:23:08.960561 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 45/120
	I0731 21:23:09.962668 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 46/120
	I0731 21:23:10.964154 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 47/120
	I0731 21:23:11.965853 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 48/120
	I0731 21:23:12.968188 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 49/120
	I0731 21:23:13.970555 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 50/120
	I0731 21:23:14.972183 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 51/120
	I0731 21:23:15.973970 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 52/120
	I0731 21:23:16.975657 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 53/120
	I0731 21:23:17.977486 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 54/120
	I0731 21:23:18.979257 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 55/120
	I0731 21:23:19.980718 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 56/120
	I0731 21:23:20.982751 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 57/120
	I0731 21:23:21.984028 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 58/120
	I0731 21:23:22.985705 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 59/120
	I0731 21:23:23.988144 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 60/120
	I0731 21:23:34.409872 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 61/120
	I0731 21:23:35.411739 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 62/120
	I0731 21:23:36.413405 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 63/120
	I0731 21:23:37.415007 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 64/120
	I0731 21:23:38.416595 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 65/120
	I0731 21:23:39.418686 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 66/120
	I0731 21:23:40.420217 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 67/120
	I0731 21:23:41.421788 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 68/120
	I0731 21:23:42.423223 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 69/120
	I0731 21:23:43.424890 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 70/120
	I0731 21:23:44.426908 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 71/120
	I0731 21:23:45.428413 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 72/120
	I0731 21:23:46.429995 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 73/120
	I0731 21:23:47.431536 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 74/120
	I0731 21:23:48.433141 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 75/120
	I0731 21:23:49.435319 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 76/120
	I0731 21:23:50.436855 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 77/120
	I0731 21:23:51.438376 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 78/120
	I0731 21:23:52.440759 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 79/120
	I0731 21:23:53.442230 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 80/120
	I0731 21:23:54.443744 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 81/120
	I0731 21:23:55.445274 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 82/120
	I0731 21:23:56.446933 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 83/120
	I0731 21:23:57.448645 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 84/120
	I0731 21:23:58.450130 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 85/120
	I0731 21:23:59.452312 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 86/120
	I0731 21:24:00.453979 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 87/120
	I0731 21:24:01.455406 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 88/120
	I0731 21:24:02.456739 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 89/120
	I0731 21:24:03.458716 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 90/120
	I0731 21:24:04.461112 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 91/120
	I0731 21:24:05.462645 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 92/120
	I0731 21:24:06.464345 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 93/120
	I0731 21:24:07.465746 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 94/120
	I0731 21:24:08.467313 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 95/120
	I0731 21:24:09.468824 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 96/120
	I0731 21:24:10.470197 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 97/120
	I0731 21:24:11.471791 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 98/120
	I0731 21:24:12.473686 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 99/120
	I0731 21:24:13.475426 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 100/120
	I0731 21:24:14.477179 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 101/120
	I0731 21:24:15.478780 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 102/120
	I0731 21:24:16.480271 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 103/120
	I0731 21:24:17.481696 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 104/120
	I0731 21:24:18.483276 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 105/120
	I0731 21:24:19.485678 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 106/120
	I0731 21:24:20.487603 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 107/120
	I0731 21:24:21.489285 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 108/120
	I0731 21:24:22.491084 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 109/120
	I0731 21:24:23.492860 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 110/120
	I0731 21:24:24.495228 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 111/120
	I0731 21:24:25.496931 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 112/120
	I0731 21:24:26.498445 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 113/120
	I0731 21:24:27.500069 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 114/120
	I0731 21:24:28.501692 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 115/120
	I0731 21:24:29.503993 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 116/120
	I0731 21:24:30.505628 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 117/120
	I0731 21:24:31.507250 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 118/120
	I0731 21:24:32.508780 1145361 main.go:141] libmachine: (embed-certs-563652) Waiting for machine to stop 119/120
	I0731 21:24:33.509796 1145361 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 21:24:33.509873 1145361 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 21:24:33.511618 1145361 out.go:177] 
	W0731 21:24:33.512910 1145361 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 21:24:33.512937 1145361 out.go:239] * 
	* 
	W0731 21:24:33.517323 1145361 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:24:33.518662 1145361 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-563652 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652: exit status 3 (18.511927006s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:24:52.032495 1146940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host
	E0731 21:24:52.032517 1146940 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-563652" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (148.47s)
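Same stop timeout as the no-preload case, and the post-mortem status then fails with "no route to host" to 192.168.50.203:22. A quick, illustrative reachability probe (IP taken from the log above) helps tell a powered-off guest apart from a guest whose SSH service died:
	# Probe the guest's SSH port with a short timeout; a failure here matches the "no route to host" status error.
	nc -vz -w 3 192.168.50.203 22 || echo "guest unreachable on SSH"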

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-275462 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-275462 create -f testdata/busybox.yaml: exit status 1 (59.503362ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-275462" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-275462 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 6 (243.957731ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:13.459741 1145933 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275462" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 6 (236.943442ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:13.700366 1145963 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275462" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)
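The deploy step fails only because the old-k8s-version-275462 context was never written to the kubeconfig (FirstStart never produced a cluster). A minimal sketch of the checks the warning text points at, assuming the same kubeconfig path the job uses:
	# Confirm the context is really missing from the job's kubeconfig.
	KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig kubectl config get-contexts
	# Had the profile been healthy, the stale-context warning could be resolved as it suggests:
	out/minikube-linux-amd64 update-context -p old-k8s-version-275462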

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m54.351438589s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-275462 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-275462 describe deploy/metrics-server -n kube-system: exit status 1 (46.639534ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-275462" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-275462 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 6 (227.779828ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:25:08.325281 1147292 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275462" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.63s)
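
Here the addon enable fails one layer deeper: the manifests are applied inside the guest with the bundled kubectl, and that apply cannot reach the apiserver on localhost:8443 at all, so the metrics-server image settings were never the issue. A quick way to confirm the control plane itself is down before debugging the addon; a sketch, assuming a reachable guest and the standard CRI-O tooling inside the minikube VM:

# is the control plane reported healthy at all?
minikube status -p old-k8s-version-275462

# inside the guest: is anything listening on 8443, and is kube-apiserver running under CRI-O?
minikube ssh -p old-k8s-version-275462 -- sudo ss -ltnp | grep 8443
minikube ssh -p old-k8s-version-275462 -- sudo crictl ps -a | grep kube-apiserver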

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (17.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891: exit status 3 (8.160282349s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:37.504630 1146287 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host
	E0731 21:23:37.504655 1146287 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-018891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-018891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15273834s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-018891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891: exit status 3 (3.062652768s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:23:46.720494 1146589 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host
	E0731 21:23:46.720524 1146589 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-018891" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (17.38s)
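
Both checks in this test go through SSH to the guest, so "no route to host" only proves the guest is unreachable, not what state it is actually in. With the kvm2 driver, libvirt can be asked directly; a sketch, assuming the domain carries the profile name (the kvm2 driver's convention) and that virsh is available on the Jenkins host:

# what does libvirt think the domain is doing?
sudo virsh list --all
sudo virsh domstate no-preload-018891

# if it is still running, retry the stop with verbose logging to see where it hangs
minikube stop -p no-preload-018891 --alsologtostderr -v=3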

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652: exit status 3 (3.168352079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:24:55.200519 1147037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host
	E0731 21:24:55.200540 1147037 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-563652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-563652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154432716s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-563652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652: exit status 3 (3.061309278s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:25:04.416595 1147186 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host
	E0731 21:25:04.416622 1147186 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-563652" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
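
Same pattern as the no-preload case above: addons enable first lists paused containers over SSH (via crictl), so MK_ADDON_ENABLE_PAUSED here is just the unreachable guest surfacing through a different code path. A small pre-check before retrying, assuming netcat is present on the host and using the guest IP from the log:

# is the guest's SSH port reachable at all?
nc -vz -w 5 192.168.50.203 22 || echo "guest still unreachable - fix the VM before touching addons"

# once it answers, the original command can simply be re-run
minikube addons enable dashboard -p embed-certs-563652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4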

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-755535 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-755535 --alsologtostderr -v=3: exit status 82 (2m0.509225167s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-755535"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:24:54.553669 1147134 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:24:54.553803 1147134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:24:54.553813 1147134 out.go:304] Setting ErrFile to fd 2...
	I0731 21:24:54.553817 1147134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:24:54.554030 1147134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:24:54.554267 1147134 out.go:298] Setting JSON to false
	I0731 21:24:54.554346 1147134 mustload.go:65] Loading cluster: default-k8s-diff-port-755535
	I0731 21:24:54.554724 1147134 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:24:54.554797 1147134 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:24:54.554968 1147134 mustload.go:65] Loading cluster: default-k8s-diff-port-755535
	I0731 21:24:54.555069 1147134 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:24:54.555113 1147134 stop.go:39] StopHost: default-k8s-diff-port-755535
	I0731 21:24:54.555467 1147134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:24:54.555518 1147134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:24:54.571118 1147134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0731 21:24:54.571708 1147134 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:24:54.572341 1147134 main.go:141] libmachine: Using API Version  1
	I0731 21:24:54.572369 1147134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:24:54.572686 1147134 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:24:54.575168 1147134 out.go:177] * Stopping node "default-k8s-diff-port-755535"  ...
	I0731 21:24:54.576430 1147134 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 21:24:54.576479 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:24:54.576747 1147134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 21:24:54.576779 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:24:54.579459 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:24:54.579846 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:23:49 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:24:54.579874 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:24:54.580052 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:24:54.580224 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:24:54.580391 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:24:54.580665 1147134 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:24:54.680884 1147134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 21:24:54.742360 1147134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 21:24:54.791176 1147134 main.go:141] libmachine: Stopping "default-k8s-diff-port-755535"...
	I0731 21:24:54.791217 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:24:54.792657 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Stop
	I0731 21:24:54.797037 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 0/120
	I0731 21:24:55.798489 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 1/120
	I0731 21:24:56.800017 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 2/120
	I0731 21:24:57.801444 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 3/120
	I0731 21:24:58.802871 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 4/120
	I0731 21:24:59.805137 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 5/120
	I0731 21:25:00.806751 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 6/120
	I0731 21:25:01.808127 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 7/120
	I0731 21:25:02.809693 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 8/120
	I0731 21:25:03.811076 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 9/120
	I0731 21:25:04.812770 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 10/120
	I0731 21:25:05.814402 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 11/120
	I0731 21:25:06.815929 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 12/120
	I0731 21:25:07.817588 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 13/120
	I0731 21:25:08.819079 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 14/120
	I0731 21:25:09.821024 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 15/120
	I0731 21:25:10.822576 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 16/120
	I0731 21:25:11.823979 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 17/120
	I0731 21:25:12.825727 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 18/120
	I0731 21:25:13.827299 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 19/120
	I0731 21:25:14.829839 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 20/120
	I0731 21:25:15.831369 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 21/120
	I0731 21:25:16.833375 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 22/120
	I0731 21:25:17.835277 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 23/120
	I0731 21:25:18.836822 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 24/120
	I0731 21:25:19.839227 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 25/120
	I0731 21:25:20.841185 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 26/120
	I0731 21:25:21.842597 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 27/120
	I0731 21:25:22.844196 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 28/120
	I0731 21:25:23.845605 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 29/120
	I0731 21:25:24.847053 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 30/120
	I0731 21:25:25.848697 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 31/120
	I0731 21:25:26.850459 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 32/120
	I0731 21:25:27.851832 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 33/120
	I0731 21:25:28.853445 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 34/120
	I0731 21:25:29.855660 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 35/120
	I0731 21:25:30.857161 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 36/120
	I0731 21:25:31.858394 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 37/120
	I0731 21:25:32.859993 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 38/120
	I0731 21:25:33.861346 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 39/120
	I0731 21:25:34.862797 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 40/120
	I0731 21:25:35.864499 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 41/120
	I0731 21:25:36.865962 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 42/120
	I0731 21:25:37.867483 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 43/120
	I0731 21:25:38.869077 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 44/120
	I0731 21:25:39.871417 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 45/120
	I0731 21:25:40.873104 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 46/120
	I0731 21:25:41.874755 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 47/120
	I0731 21:25:42.876459 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 48/120
	I0731 21:25:43.878124 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 49/120
	I0731 21:25:44.880191 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 50/120
	I0731 21:25:45.881681 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 51/120
	I0731 21:25:46.883318 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 52/120
	I0731 21:25:47.884927 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 53/120
	I0731 21:25:48.886807 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 54/120
	I0731 21:25:49.889285 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 55/120
	I0731 21:25:50.890829 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 56/120
	I0731 21:25:51.892298 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 57/120
	I0731 21:25:52.893915 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 58/120
	I0731 21:25:53.895485 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 59/120
	I0731 21:25:54.897251 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 60/120
	I0731 21:25:55.898841 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 61/120
	I0731 21:25:56.900634 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 62/120
	I0731 21:25:57.902416 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 63/120
	I0731 21:25:58.903956 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 64/120
	I0731 21:25:59.906304 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 65/120
	I0731 21:26:00.908130 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 66/120
	I0731 21:26:01.909886 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 67/120
	I0731 21:26:02.911487 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 68/120
	I0731 21:26:03.912982 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 69/120
	I0731 21:26:04.915096 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 70/120
	I0731 21:26:05.916953 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 71/120
	I0731 21:26:06.918653 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 72/120
	I0731 21:26:07.920517 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 73/120
	I0731 21:26:08.922238 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 74/120
	I0731 21:26:09.924558 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 75/120
	I0731 21:26:10.926626 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 76/120
	I0731 21:26:11.928332 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 77/120
	I0731 21:26:12.929791 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 78/120
	I0731 21:26:13.931553 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 79/120
	I0731 21:26:14.933934 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 80/120
	I0731 21:26:15.935913 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 81/120
	I0731 21:26:16.937403 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 82/120
	I0731 21:26:17.939152 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 83/120
	I0731 21:26:18.940935 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 84/120
	I0731 21:26:19.943674 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 85/120
	I0731 21:26:20.945330 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 86/120
	I0731 21:26:21.946906 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 87/120
	I0731 21:26:22.948194 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 88/120
	I0731 21:26:23.949760 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 89/120
	I0731 21:26:24.952205 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 90/120
	I0731 21:26:25.953994 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 91/120
	I0731 21:26:26.955398 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 92/120
	I0731 21:26:27.956916 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 93/120
	I0731 21:26:28.958589 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 94/120
	I0731 21:26:29.960957 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 95/120
	I0731 21:26:30.962212 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 96/120
	I0731 21:26:31.963831 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 97/120
	I0731 21:26:32.965628 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 98/120
	I0731 21:26:33.967276 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 99/120
	I0731 21:26:34.969326 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 100/120
	I0731 21:26:35.970952 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 101/120
	I0731 21:26:36.972505 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 102/120
	I0731 21:26:37.974095 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 103/120
	I0731 21:26:38.975515 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 104/120
	I0731 21:26:39.977618 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 105/120
	I0731 21:26:40.979542 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 106/120
	I0731 21:26:41.981076 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 107/120
	I0731 21:26:42.982952 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 108/120
	I0731 21:26:43.984770 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 109/120
	I0731 21:26:44.987152 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 110/120
	I0731 21:26:45.988515 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 111/120
	I0731 21:26:46.990229 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 112/120
	I0731 21:26:47.992193 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 113/120
	I0731 21:26:48.993937 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 114/120
	I0731 21:26:49.996484 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 115/120
	I0731 21:26:50.998440 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 116/120
	I0731 21:26:52.000081 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 117/120
	I0731 21:26:53.001650 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 118/120
	I0731 21:26:54.003438 1147134 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for machine to stop 119/120
	I0731 21:26:55.005061 1147134 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 21:26:55.005140 1147134 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 21:26:55.007511 1147134 out.go:177] 
	W0731 21:26:55.009237 1147134 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 21:26:55.009265 1147134 out.go:239] * 
	* 
	W0731 21:26:55.013420 1147134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:26:55.014769 1147134 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-755535 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
E0731 21:27:00.019395 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535: exit status 3 (18.583360262s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:27:13.600514 1147809 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	E0731 21:27:13.600544 1147809 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-755535" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)
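
The stop path shown above backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the driver to stop the domain, and then polls 120 times (about two minutes) before giving up with GUEST_STOP_TIMEOUT. One way to unwedge a guest that ignores the graceful stop, sketched under the assumption that the libvirt domain carries the profile name:

# ask for a graceful ACPI shutdown first, then hard power-off as a last resort
sudo virsh shutdown default-k8s-diff-port-755535
sleep 30
sudo virsh destroy default-k8s-diff-port-755535

# confirm minikube now sees the machine as stopped
minikube status -p default-k8s-diff-port-755535 --alsologtostderr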

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (736.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m12.889309149s)

                                                
                                                
-- stdout --
	* [old-k8s-version-275462] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-275462" primary control-plane node in "old-k8s-version-275462" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:25:14.856972 1147424 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:25:14.857087 1147424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:25:14.857098 1147424 out.go:304] Setting ErrFile to fd 2...
	I0731 21:25:14.857104 1147424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:25:14.857844 1147424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:25:14.858753 1147424 out.go:298] Setting JSON to false
	I0731 21:25:14.859879 1147424 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18466,"bootTime":1722442649,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:25:14.859947 1147424 start.go:139] virtualization: kvm guest
	I0731 21:25:14.861813 1147424 out.go:177] * [old-k8s-version-275462] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:25:14.864079 1147424 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:25:14.864075 1147424 notify.go:220] Checking for updates...
	I0731 21:25:14.866587 1147424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:25:14.867881 1147424 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:25:14.869102 1147424 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:25:14.870305 1147424 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:25:14.871413 1147424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:25:14.872871 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:25:14.873263 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:25:14.873311 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:25:14.889303 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0731 21:25:14.889881 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:25:14.890447 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:25:14.890472 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:25:14.890974 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:25:14.891143 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:25:14.892822 1147424 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 21:25:14.894382 1147424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:25:14.894707 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:25:14.894756 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:25:14.910096 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0731 21:25:14.910627 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:25:14.911127 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:25:14.911166 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:25:14.911462 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:25:14.911697 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:25:14.949911 1147424 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:25:14.951322 1147424 start.go:297] selected driver: kvm2
	I0731 21:25:14.951340 1147424 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:25:14.951475 1147424 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:25:14.952319 1147424 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:25:14.952415 1147424 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:25:14.968759 1147424 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:25:14.969197 1147424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:25:14.969250 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:25:14.969260 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:25:14.969312 1147424 start.go:340] cluster config:
	{Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:25:14.969423 1147424 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:25:14.972269 1147424 out.go:177] * Starting "old-k8s-version-275462" primary control-plane node in "old-k8s-version-275462" cluster
	I0731 21:25:14.973648 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:25:14.973705 1147424 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:25:14.973713 1147424 cache.go:56] Caching tarball of preloaded images
	I0731 21:25:14.973812 1147424 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:25:14.973857 1147424 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 21:25:14.973968 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:25:14.974169 1147424 start.go:360] acquireMachinesLock for old-k8s-version-275462: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
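The "Using SSH client type: external" block above shells out to the system ssh binary with the printed options to probe that the guest accepts connections (the `exit 0` command). A rough sketch of that invocation, with the arguments taken from the log line and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Options copied from the log; the key path is the machine's id_rsa.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.72.107",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa",
		"-p", "22",
		"exit 0", // the liveness probe command from the log
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}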
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
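configureAuth regenerates a server certificate signed by the local minikube CA with the SAN list shown above. A minimal crypto/x509 sketch of issuing such a certificate; the file names, validity period, error handling, and PKCS#1 RSA key format are illustrative assumptions, not what provision.go actually does:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA loaded from PEM files (paths and key format are placeholders).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// SANs from the log: IPs go into IPAddresses, names into DNSNames.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275462"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.107")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-275462"},
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}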
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
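The fix.go lines above parse the guest's `date +%s.%N` output and compare it against the host-side timestamp, treating the machine as in sync when the difference stays small. A small sketch of that comparison; the 2-second threshold is an assumed value for illustration, the log does not state minikube's actual tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time parsed from `date +%s.%N` (values from the log above).
	guest := time.Unix(1722461357, 608223429).UTC()
	// Host-side timestamp taken just before the SSH call.
	remote := time.Date(2024, 7, 31, 21, 29, 17, 525761122, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}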
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
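The "needs transfer … does not exist at hash" decisions above come from asking the container runtime for each image's ID (via `podman image inspect --format {{.Id}}`) and comparing it to the expected digest. A rough sketch of that check; the helper name and the exact comparison are assumptions rather than minikube's cache_images.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageMatches reports whether the runtime already has the image at the
// expected ID; an inspect error is treated as "image not present".
func imageMatches(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == wantID, nil
}

func main() {
	ok, err := imageMatches("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("pause:3.2 already present at expected ID:", ok, err)
}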
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
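The kubeadm, kubelet and kube-proxy configuration dumped above is rendered from a handful of per-node values (node IP, cluster name, CIDRs, cgroup driver) and shipped to the VM as /var/tmp/minikube/kubeadm.yaml.new, the 2123-byte transfer just above. As a rough, self-contained sketch of producing such a fragment from node values — the template text and struct below are invented for illustration and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// nodeValues holds the per-node inputs the fragment needs.
// Field names are invented for this example.
type nodeValues struct {
	ClientCAFile string
	CgroupDriver string
}

const kubeletFragment = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: {{.ClientCAFile}}
cgroupDriver: {{.CgroupDriver}}
clusterDomain: "cluster.local"
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletFragment))
	// Values taken from the generated config above.
	_ = t.Execute(os.Stdout, nodeValues{
		ClientCAFile: "/var/lib/minikube/certs/ca.crt",
		CgroupDriver: "cgroupfs",
	})
}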
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
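The bash one-liner above strips any existing control-plane.minikube.internal line from /etc/hosts and appends the node IP. The same edit expressed in Go, for reference (IP and hostname copied from the log; writing /etc/hosts requires root, so treat this as an illustration only):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.107\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep every line except a previous control-plane.minikube.internal entry.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}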
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
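Each openssl x509 -checkend 86400 call above verifies that a certificate stays valid for at least another 24 hours before the cluster restart proceeds. A rough Go equivalent for a single certificate, using the first path checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24 hours")
}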
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
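For the restart path, the log above re-runs kubeadm's certs, kubeconfig, kubelet-start, control-plane and etcd init phases one at a time against the generated config. A simplified local sketch of that sequence using os/exec (binary and config paths are copied from the log; the real run executes these commands over SSH on the VM):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same phase order as the five commands logged above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", phase, err)
		}
	}
}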
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
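The long run of pgrep calls above is the wait for a kube-apiserver process to appear, retried roughly every 500ms (visible in the timestamps); once that wait gives up, the kubelet, dmesg, describe-nodes, CRI-O and container-status logs are collected instead, as in the block just above. A minimal sketch of such a fixed-interval wait, with an invented helper name and an assumed one-minute deadline:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until the timeout,
// mirroring the ~500ms cadence visible in the timestamps above.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check the log runs over SSH: is a kube-apiserver process up?
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if waitForAPIServer(1 * time.Minute) {
		fmt.Println("kube-apiserver process appeared")
	} else {
		fmt.Println("timed out waiting for kube-apiserver")
	}
}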
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
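	(The block above is one full diagnostic cycle that the log then repeats roughly every three seconds. For readers who want to replay it by hand inside the guest, e.g. after `minikube ssh`, the following is a rough sketch assembled from the exact commands shown in the log; the loop structure and the COMPONENTS variable are only illustrative, and the kubectl path assumes the v1.20.0 binary layout used in this run.)

	#!/usr/bin/env bash
	# Sketch of the diagnostic cycle repeated in the log above (assumption:
	# run as a user with sudo inside the minikube guest).
	set -u

	# 1. Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# 2. Ask CRI-O (via crictl) whether any control-plane containers exist.
	COMPONENTS="kube-apiserver etcd coredns kube-scheduler kube-proxy \
	kube-controller-manager kindnet kubernetes-dashboard"
	for name in $COMPONENTS; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done

	# 3. Gather the same logs minikube collects when nothing is found.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	(In this run every crictl query returns an empty id list and the describe-nodes step fails with "connection to the server localhost:8443 was refused", which is why the cycle keeps repeating below.)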
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
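	The block above is one pass of minikube's retry loop: it looks for a running kube-apiserver process with pgrep, then asks CRI-O via crictl whether a container named kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard exists in any state, and every probe comes back empty. A minimal sketch of that probe, assuming crictl is installed and reachable as root (illustrative only, not minikube's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same container names the log probes for, in the same order.
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range names {
			// Mirrors `sudo crictl ps -a --quiet --name=<name>`: list containers
			// in any state whose name matches, printing only their IDs.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%-24s probe failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("%-24s no containers found\n", name)
				continue
			}
			fmt.Printf("%-24s %d container(s): %s\n", name, len(ids), strings.Join(ids, ", "))
		}
	}

	An empty result for every name is why each pass falls through to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later.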
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
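	The "container status" step at the end of each pass runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. it prefers crictl and only falls back to the docker CLI if crictl is missing or fails. A hedged Go sketch of that same fallback (hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Try crictl first, mirroring the log's `sudo crictl ps -a || sudo docker ps -a`.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			// crictl absent or erroring: fall back to listing containers with docker.
			out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
			if err != nil {
				fmt.Println("neither crictl nor docker could list containers:", err)
				return
			}
		}
		fmt.Print(string(out))
	}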
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
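	Every "describe nodes" attempt in this log fails the same way: the bundled v1.20.0 kubectl cannot reach the apiserver because nothing is listening on localhost:8443, which is consistent with the kube-apiserver container never being found by the probes above. A quick, hedged way to confirm the port is closed (illustrative check, not taken from the report):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt a plain TCP connection to the address kubectl is refusing on.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}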
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	* 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	* 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 

                                                
                                                
** /stderr **
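The kubeadm output captured above already names the on-node checks for a kubelet that never came up. A consolidated sketch of that triage, wrapping the quoted commands in 'minikube ssh' so they run inside the old-k8s-version-275462 VM (the ssh wrapper and sudo are assumptions made here; CONTAINERID is the placeholder from the kubeadm hint, not a real ID):

	# Why did the kubelet stop? (commands quoted from the kubeadm output above)
	out/minikube-linux-amd64 -p old-k8s-version-275462 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-275462 ssh -- sudo journalctl -xeu kubelet
	# List control-plane containers under cri-o and inspect a failing one
	out/minikube-linux-amd64 -p old-k8s-version-275462 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-275462 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID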
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
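The Suggestion line in the log above is to retry with an explicit kubelet cgroup driver. A hedged sketch of that retry, reusing the arguments from the failed invocation (exit status 109) plus the suggested --extra-config flag; whether this actually fixes the v1.20.0 start is not verified by this report:

	out/minikube-linux-amd64 start -p old-k8s-version-275462 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd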
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (240.73665ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25: (1.665461364s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
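The shell snippet above keeps /etc/hosts consistent with the new hostname: replace an existing 127.0.1.1 alias, or append one if none exists. The following pure-Go equivalent is an illustrative sketch (it simplifies the grep checks), not minikube's code.

```go
// Sketch: rewrite the 127.0.1.1 alias line in an /etc/hosts-style string.
package main

import (
	"fmt"
	"strings"
)

func patchHosts(hosts, hostname string) string {
	if strings.Contains(hosts, hostname) {
		return hosts // hostname already present, nothing to do
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the existing alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + hostname // no alias line, append one
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost\n127.0.1.1 minikube", "embed-certs-563652"))
}
```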
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
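The provisioning step above mints a server certificate whose subject alternative names cover the loopback address, the guest IP, and the machine hostnames. A rough sketch with the same SAN set is below; it self-signs for brevity, whereas minikube signs with the CA key named in the log, so treat everything here as illustrative.

```go
// Sketch: issue a TLS server certificate with the SANs shown in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-563652"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-563652", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.203")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```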
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
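The container-runtime step just logged writes a sysconfig drop-in with extra CRI-O options and restarts the service. A minimal sketch of that write-and-restart, assuming the path and flag value shown in the log; the code itself is illustrative only and needs root on the guest.

```go
// Sketch: write /etc/sysconfig/crio.minikube and restart CRI-O.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		fmt.Println("write drop-in:", err)
		return
	}
	// restart crio so it picks up the new options
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Println("restart crio:", err, string(out))
	}
}
```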
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
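The fix step above compares the guest clock against the host and only resyncs when the delta exceeds a tolerance. A sketch of that comparison follows; the 2s threshold is an assumption for illustration, not minikube's actual value.

```go
// Sketch: absolute clock-delta check between guest and host timestamps.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1722461337, 115873615) // value reported by the guest above
	host := guest.Add(-63285855 * time.Nanosecond)
	fmt.Println(clockDeltaOK(guest, host, 2*time.Second)) // true: 63ms is within tolerance
}
```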
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
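The sed commands above point CRI-O at the desired pause image and cgroup manager in 02-crio.conf. The in-memory Go equivalent below mirrors the logged sed expressions; it is a sketch, not minikube's implementation, and the sample config text is made up for the demo.

```go
// Sketch: rewrite pause_image and cgroup_manager in a crio.conf fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```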
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
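The sequence above is a fallback: when the bridge netfilter sysctl cannot be read, load br_netfilter, then enable IPv4 forwarding. A simplified sketch of that fallback, with error handling reduced for illustration (it needs root on the guest):

```go
// Sketch: load br_netfilter if the bridge sysctl is missing, then enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// the bridge module is not loaded yet; try to load it
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
```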
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
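Before and after the tarball copy, the log decides whether images are already preloaded by listing them through crictl. The sketch below shows that kind of check; the JSON field names (images, repoTags) reflect crictl's output format as commonly documented but should be treated as an assumption, and the check itself is illustrative rather than minikube's code.

```go
// Sketch: decide whether the expected kube images are already present via crictl.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "kube-apiserver:v1.30.3") {
				fmt.Println("preloaded images present")
				return
			}
		}
	}
	fmt.Println("images not preloaded; the tarball would be copied and extracted")
}
```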
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
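The kubelet systemd drop-in above is rendered from the node settings (Kubernetes version, hostname override, node IP). A small sketch of rendering that drop-in with text/template follows; the template text and struct fields are illustrative, and only the flags mirror the log.

```go
// Sketch: render a kubelet systemd drop-in from node settings.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, struct{ Version, Hostname, NodeIP string }{
		Version: "v1.30.3", Hostname: "embed-certs-563652", NodeIP: "192.168.50.203",
	})
}
```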
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
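One property of the generated kubeadm config above is that the pod and service CIDRs must be valid and must not contain the node IP. The quick sanity check below illustrates that idea only; kubeadm performs its own, much richer validation.

```go
// Sketch: validate the CIDRs from the kubeadm config against the node IP.
package main

import (
	"fmt"
	"net"
)

func main() {
	nodeIP := net.ParseIP("192.168.50.203")
	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			fmt.Println("bad CIDR", cidr, err)
			continue
		}
		if subnet.Contains(nodeIP) {
			fmt.Println("node IP overlaps", cidr)
		} else {
			fmt.Println(cidr, "ok")
		}
	}
}
```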
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
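The "-checkend 86400" probes above ask openssl whether each certificate expires within the next 24 hours. A pure-Go equivalent of one such probe is sketched below; the file path is one of those checked in the log, and the helper name is illustrative.

```go
// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```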
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
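The four checks above all fail the same way: none of the kubeconfig-style files exist yet, so each one is unconditionally removed before the config is rewritten. A minimal shell equivalent of the loop visible in this log (endpoint and paths taken verbatim from the log lines above):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done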
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
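The retry lines above are the kvm2 driver polling libvirt for a DHCP lease on the mk-old-k8s-version-275462 network, backing off a little longer on each attempt. To watch the same thing by hand, the lease table for that network can be queried directly (assuming virsh is available on the host):

    # shows the MAC/IP pairs handed out on the minikube-created libvirt network
    virsh net-dhcp-leases mk-old-k8s-version-275462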
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
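Rather than a full `kubeadm init`, the restart path seen here replays individual init phases against the staged config. Condensed from the commands in this log (binary and config paths exactly as logged; the addon phase runs later, once the apiserver is healthy):

    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" sh -c '
      kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
      kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
    '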
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
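The healthz wait above passes through the usual stages: connection refused while the apiserver is still binding, HTTP 403 for the anonymous probe until the RBAC bootstrap roles exist, HTTP 500 with a per-poststarthook breakdown, then 200. The same anonymous probe can be issued by hand; -k is needed because no client certificate is presented (endpoint taken from the log):

    # returns "ok" once healthy; ?verbose reproduces the [+]/[-] check list shown above
    curl -ks https://192.168.50.203:8443/healthz
    curl -ks 'https://192.168.50.203:8443/healthz?verbose'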
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
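The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced a few lines earlier. Its exact contents are not reproduced in this log; a conflist of roughly this shape is what a bridge setup normally looks like (the subnet and plugin options below are illustrative assumptions, not the file that was actually copied):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF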
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
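The inventory above (control-plane pods Running but not yet Ready, metrics-server Pending) can be reproduced against the same profile with kubectl; the context name below assumes the usual minikube convention of naming the kubeconfig context after the profile:

    kubectl --context embed-certs-563652 get pods -n kube-system -o wide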
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
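The %!s(MISSING) tokens in the command logged just above are printf format verbs swallowed by the logger; judging from the output echoed back by tee, the command that actually ran was approximately:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio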
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
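The guest-clock check works by running date +%s.%N on the guest over SSH (the format verbs are again mangled to %!s(MISSING).%!N(MISSING) in the log) and comparing the result with the host clock captured when fixHost finished; the ~82ms delta here is well inside tolerance. A shell sketch of the same comparison, reusing the SSH target that appears earlier in this log:

    # guest wall clock, e.g. 1722461357.608223429
    guest=$(ssh docker@192.168.72.107 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"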
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
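	The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed so the pause image and cgroup manager match what kubeadm will be told to use, then restart CRI-O. A rough Go equivalent of those two in-place edits (names and the sample input are illustrative; minikube actually runs the sed commands over SSH):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the two sed edits from the log: point pause_image at
// the desired pause image and force cgroup_manager to the given driver.
func rewriteCrioConf(conf []byte, pauseImage, cgroupManager string) []byte {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return conf
}

func main() {
	in := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")
	out := rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs")
	fmt.Print(string(out))
}
```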
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
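	The retry.go lines above poll libvirt for the machine's DHCP lease, sleeping a growing, jittered interval between attempts (247ms, 287ms, 317ms, 555ms, ...). A generic retry-with-backoff sketch in Go; the helper name and backoff formula are assumptions for illustration, not minikube's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		// grow the delay roughly linearly and add jitter, as the log's
		// progression of retry intervals suggests
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```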
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
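	The pod_ready.go lines above wait up to 4m0s per kube-system pod for its Ready condition. A minimal polling sketch in plain Go; the probe closure is a hypothetical stand-in for the Kubernetes API call the test helper actually makes:

```go
package main

import (
	"fmt"
	"time"
)

// waitForCondition polls check every interval until it returns true or the
// timeout elapses. It mirrors the "waiting up to 4m0s for pod ..." loop in
// spirit only; the real code inspects the pod's Ready condition via the API.
func waitForCondition(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitForCondition(4*time.Minute, 500*time.Millisecond, func() (bool, error) {
		// hypothetical probe: reports ready after ~2s
		return time.Since(start) > 2*time.Second, nil
	})
	fmt.Println("pod ready:", err == nil)
}
```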
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
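	The `openssl x509 -noout -in ... -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another 24 hours. An equivalent check using Go's standard library (a sketch; the example path is just one of the certificates listed above):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// example path; minikube checks several certs under /var/lib/minikube/certs
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```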
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
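	Note on the evictionHard block above: the "0%!"(MISSING) values are a formatting artifact, not the real thresholds. The template sets each threshold to a literal "0%" string, and the stray percent sign is later re-interpreted as a printf verb with no matching argument when the rendered YAML is echoed through a Printf-style call. A minimal Go sketch of the effect (the direct Sprintf call here is an illustrative assumption, not minikube's actual code path):

package main

import "fmt"

func main() {
	// The kubeadm template sets the eviction thresholds to literal "0%" strings.
	rendered := `nodefs.available: "0%"`

	// Re-passing the rendered text as a format string makes fmt treat the
	// trailing `%"` as a verb with no matching argument, which it reports as
	// %!"(MISSING) -- the same runes that appear in the log output above.
	fmt.Println(fmt.Sprintf(rendered))
	// Output: nodefs.available: "0%!"(MISSING)
}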
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
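	The one-liner above strips any existing line ending in a tab plus control-plane.minikube.internal and appends a fresh "IP<tab>hostname" pair, so repeated starts stay idempotent. A rough Go equivalent of that idiom (the path and permissions are illustrative, not the tool's own code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for the hostname and appends a fresh
// "IP<TAB>hostname" entry, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and the old entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative target file; the real update is applied to /etc/hosts on the guest VM.
	if err := ensureHostsEntry("/tmp/hosts.demo", "192.168.39.145", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}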
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
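	Each "openssl x509 -hash -noout" call above prints the certificate's subject-name hash, and the "ln -fs" that follows creates the "<hash>.0" name (b5213941.0 for minikubeCA, for example) that OpenSSL-style hashed certificate directories are looked up by. A rough Go sketch of the same two steps, shelling out to the same openssl invocation; the paths are illustrative and should point at a scratch directory rather than a live /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash, then
// creates the "<hash>.0" symlink that hashed certificate directories are
// scanned by, mirroring the two commands in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	if err := os.MkdirAll(certsDir, 0o755); err != nil {
		return err
	}
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo"); err != nil {
		fmt.Println("link failed:", err)
	}
}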
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
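	Each of these "openssl x509 -checkend 86400" runs exits non-zero if the certificate expires within the next 24 hours, which is what decides whether the existing control-plane certificates can be reused on restart. The equivalent check in plain Go with crypto/x509 (the file path is just one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}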
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
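	The driver here is waiting for the freshly created VM to obtain a DHCP lease, retrying with a growing, slightly randomized delay each time the address lookup fails. A generic sketch of that retry shape (illustrative only, not the retry helper the log refers to):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a growing, jittered interval in between -- the same shape as the
// "will retry after 1.044468751s" lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up") // e.g. no DHCP lease yet
		}
		return nil
	})
	fmt.Println("done:", err)
}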
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
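	The health wait keeps polling /healthz and treats the early answers as retryable: the 403 appears while the request is still seen as system:anonymous (the RBAC bootstrap roles that open /healthz to unauthenticated callers have not been created yet), and the 500s report poststarthooks such as rbac/bootstrap-roles that are still pending. Only a 200 ends the loop. A compact Go sketch of that loop; the endpoint is taken from the log, while the InsecureSkipVerify transport is an illustrative shortcut (the real code trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Non-200 answers are logged and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skipping verification avoids needing the cluster CA here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.145:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}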
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
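	The extra wait loops over each system-critical pod and re-checks its Ready condition, skipping early while the node itself still reports Ready=False. Outside the harness, the same check can be written against client-go directly; a sketch, assuming the kubeconfig path written above and the coredns pod name from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19360-1093692/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-t9v4z", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}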
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
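	The wait loop above polls each system-critical pod for the Ready condition and records an error (rather than blocking) while the hosting node itself still reports Ready=False. A minimal sketch of that polling pattern with client-go, assuming a local kubeconfig and reusing the coredns pod name from the log, looks like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls one pod until its PodReady condition is True or the timeout elapses.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above; 4m0s matches the "extra waiting" budget.
		if err := waitPodReady(cs, "kube-system", "coredns-5cfdc65f69-9w4w4", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}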
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
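	The addon enablement above copies each manifest into /etc/kubernetes/addons and then applies the batch with the pinned kubectl binary against the in-VM kubeconfig. A hedged sketch of that apply step, using the paths from the log but invoked directly via os/exec instead of minikube's ssh_runner, would be:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Manifest and binary paths are taken verbatim from the log above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		// Mirror the log's command shape: sudo KUBECONFIG=... <pinned kubectl> apply -f ... -f ...
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", "apply",
		}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
		}
	}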
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
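	Each gathering cycle above collects the kubelet and CRI-O journals, dmesg, "describe nodes", and container status, and simply records the failure when the apiserver on localhost:8443 refuses connections. A simplified sketch of that sequence (illustrative only, not minikube's logs.go) is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Commands mirror the gathering steps in the log above.
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s\n", s.name, out)
			if err != nil {
				// e.g. "describe nodes" exits non-zero while the apiserver is unreachable;
				// record the failure and continue with the next source.
				fmt.Printf("==> %s failed: %v\n", s.name, err)
			}
		}
	}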
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
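The cycle above is the kubeconfig-less diagnostics pass: with nothing answering on localhost:8443, the gatherer probes each control-plane component via crictl and then falls back to journalctl/dmesg output. A rough, hand-run equivalent of that probe is sketched below; it assumes SSH access to the node and that crictl is installed there, and the component list is simply copied from the log lines above (nothing further about minikube's internals is implied).

    # Hand-run sketch of the same container probe (assumption: run on the minikube
    # node itself; component names copied from the log above).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
    # Fallback log sources, same units the gatherer reads:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400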
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
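The interleaved pod_ready lines come from other test processes polling the Ready condition of their metrics-server pods. A minimal manual check of that same condition is sketched below; the pod name is taken from the log, and it assumes kubectl is pointed at the cluster under test (the jsonpath prints "True" once the pod reports Ready).

    # Assumption: current kubectl context targets the cluster being polled above.
    kubectl --namespace kube-system get pod metrics-server-569cc877fc-6jkw9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'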
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
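Every "describe nodes" attempt in this window fails the same way because nothing is listening on localhost:8443 on the node, consistent with the empty crictl listings for kube-apiserver. A quick way to confirm that directly, assuming ss and curl are available on the node, is sketched below; port 8443 is taken from the refusal message itself.

    # Assumption: run on the node; /healthz is used only as a generic liveness path.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
    curl -ksS https://localhost:8443/healthz || echo "connection to localhost:8443 refused"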
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
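	The timeouts above are minikube's pod_ready poller giving up after 4m0s waiting for the metrics-server pod's Ready condition, after which it falls back to resetting the primary control plane. A hedged example of inspecting the same condition by hand (pod name taken from the log; the kubectl context is assumed to point at this cluster):

	    kubectl -n kube-system get pod metrics-server-569cc877fc-6jkw9 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'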
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
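	The config check above looks for the expected control-plane endpoint in each kubeconfig under /etc/kubernetes and removes any file that does not contain it; here every grep exits with status 2 because the files are absent, so the rm calls are no-ops. A condensed sketch of that cleanup logic (illustrative only):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: clear it before kubeadm init
	      fi
	    done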
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
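	The [kubelet-check] and [api-check] phases above poll health endpoints until the kubelet and the API server report healthy. A rough manual equivalent (illustrative; assumes the kubelet's default local healthz port and the admin kubeconfig kubeadm just wrote):

	    # kubelet health (default local healthz port 10248)
	    curl -sf http://127.0.0.1:10248/healthz

	    # API server liveness through the freshly written admin kubeconfig
	    sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
	      --kubeconfig=/etc/kubernetes/admin.conf get --raw /livez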
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
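	The 496-byte conflist copied above is minikube's bridge CNI configuration for this cluster. Its contents are not shown in the log; the snippet below is only an illustrative example of a typical bridge conflist with host-local IPAM and port mapping, not the exact file minikube wrote:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF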
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
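Note: the log-gathering round above follows a two-step pattern: resolve container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's logs. A rough local (non-SSH) Go sketch of that pattern:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// gatherLogs lists container IDs for a component and tails each one's logs,
// mirroring the crictl invocations in the log above (run locally, not over SSH).
func gatherLogs(component string, tail int) (string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return "", err
	}
	var out strings.Builder
	for _, id := range strings.Fields(string(ids)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", strconv.Itoa(tail), id).CombinedOutput()
		if err != nil {
			return "", err
		}
		out.Write(logs)
	}
	return out.String(), nil
}

func main() {
	logs, err := gatherLogs("kube-apiserver", 400)
	if err != nil {
		fmt.Println("gathering failed:", err)
		return
	}
	fmt.Print(logs)
}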
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
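Note: the repeated "kubectl get sa default" runs above are a simple poll: retry roughly every 500ms until the default service account exists, which signals that the kube-system RBAC bootstrap (elevateKubeSystemPrivileges) has completed. A hedged sketch of that loop, with the overall timeout chosen arbitrarily:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds,
// matching the ~500ms polling cadence visible in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // default service account exists; RBAC bootstrap finished
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}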
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
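Note: the pod_ready waits above check each system-critical pod's Ready condition by label selector. A rough client-go equivalent (the kubeconfig path and selector list are placeholders, not minikube's actual implementation) could look like:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady lists kube-system pods for each selector and requires every pod's
// PodReady condition to be True.
func allReady(cs *kubernetes.Clientset, selectors []string) (bool, error) {
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := allReady(cs, []string{"component=etcd", "component=kube-apiserver", "k8s-app=kube-dns", "k8s-app=kube-proxy"})
	fmt.Println(ok, err)
}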
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
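Note: the healthz wait above issues GET requests against https://<node-ip>:8443/healthz and treats a 200 response with body "ok" as healthy. A self-contained sketch of such a probe; skipping TLS verification is an assumption to keep the example short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns true when /healthz answers 200 with body "ok".
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.50.203:8443/healthz"))
}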
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
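Note: the addon enablement above boils down to running the bundled kubectl under sudo with KUBECONFIG pointing at the in-VM kubeconfig and applying all addon manifests in one call. A sketch of that step, mirroring the metrics-server manifest paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons runs the bundled kubectl under sudo with KUBECONFIG set,
// passing every addon manifest to a single "kubectl apply" invocation.
func applyAddons(manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddons([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	fmt.Println("apply result:", err)
}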
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
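(Editor's note on the log-gathering pass above: minikube resolves crictl with `which crictl`, lists container IDs per control-plane component with `sudo crictl ps -a --quiet --name=<component>`, then tails each container's log with `crictl logs --tail 400 <id>`. The following is a minimal, illustrative Go sketch of that same pattern, not minikube's actual implementation; it assumes crictl is on PATH and a CRI-O runtime is reachable at its default socket. File name gather.go and the component list are taken from the log for illustration.)

// gather.go: illustrative sketch of the crictl log-gathering pattern seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches component,
// mirroring: sudo crictl ps -a --quiet --name=<component>
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container's log, mirroring the commands in the log above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}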
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
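(Editor's note on the retry loop above: system_pods.go lists kube-system pods and retry.go re-checks after a short delay until no component is missing, e.g. "will retry after 538.9255ms: missing components: kube-dns, kube-proxy". Below is a rough, self-contained Go sketch of the same wait-until-Running loop; it is not minikube's code, it shells out to kubectl for simplicity, and the fixed 2-second backoff and 20-attempt cap are illustrative assumptions.)

// waitpods.go: illustrative retry loop waiting for kube-system pods to reach phase Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 1; attempt <= 20; attempt++ {
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"--no-headers", "-o", "custom-columns=NAME:.metadata.name,PHASE:.status.phase").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		var notRunning []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			fields := strings.Fields(line)
			if len(fields) == 2 && fields[1] != "Running" {
				notRunning = append(notRunning, fields[0])
			}
		}
		if len(notRunning) == 0 {
			fmt.Println("all kube-system pods are Running")
			return
		}
		// Report what is still pending, loosely mirroring the "missing components" retry messages above.
		fmt.Printf("attempt %d: still waiting for %v\n", attempt, notRunning)
		time.Sleep(2 * time.Second) // fixed backoff for illustration; minikube's retry.go uses variable delays
	}
}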
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
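(Editor's note on the repeated [kubelet-check] failures above: kubeadm is probing the kubelet's local healthz endpoint on port 10248, and "connection refused" means nothing is listening there, i.e. the kubelet process is not up. The endpoint and its plain-HTTP "ok" response are standard kubelet defaults; the small Go probe below is an illustrative sketch of the same check, with a retry cadence that is assumed, not kubeadm's actual timing.)

// healthzprobe.go: illustrative probe of the kubelet healthz endpoint that kubeadm's
// [kubelet-check] is curling in the log above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 5; i++ { // retry count and sleep are illustrative only
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" here corresponds to the kubelet not running at all.
			fmt.Println("kubelet healthz check failed:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
		return
	}
}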
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.702781142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461849702759589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=028d0b63-9658-4518-8e49-c754bd5f2991 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.703433263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02d43a2a-250b-4d9f-a924-b52ace425b9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.703500836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02d43a2a-250b-4d9f-a924-b52ace425b9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.703535568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=02d43a2a-250b-4d9f-a924-b52ace425b9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.735907648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aee86638-1be9-4753-9301-314381120814 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.735992808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aee86638-1be9-4753-9301-314381120814 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.736853334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8407b790-4cd6-4247-83c8-92152ab1c44d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.737237833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461849737197625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8407b790-4cd6-4247-83c8-92152ab1c44d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.737806576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10a7119a-267b-4cbe-865f-43ced6e7c856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.737871114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10a7119a-267b-4cbe-865f-43ced6e7c856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.737902438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=10a7119a-267b-4cbe-865f-43ced6e7c856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.769659121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76f91b4d-e44c-45e6-9b37-f28a97a6da39 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.769785214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76f91b4d-e44c-45e6-9b37-f28a97a6da39 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.771142796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc269ec3-f224-4cc3-a1f1-e5496ae70ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.771499882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461849771480519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc269ec3-f224-4cc3-a1f1-e5496ae70ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.772159382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63c0b929-6326-4afe-b9a4-01d73882481f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.772227373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63c0b929-6326-4afe-b9a4-01d73882481f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.772259121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=63c0b929-6326-4afe-b9a4-01d73882481f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.802585659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8885bac-ff1a-4a85-8c13-b850b67c3c1c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.802674472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8885bac-ff1a-4a85-8c13-b850b67c3c1c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.803977570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57a4eab2-c8c1-45f7-9266-ef00444ea8a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.804403365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722461849804379235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57a4eab2-c8c1-45f7-9266-ef00444ea8a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.804905581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba20d966-e194-474e-a24c-6c7996989e36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.804974108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba20d966-e194-474e-a24c-6c7996989e36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:37:29 old-k8s-version-275462 crio[640]: time="2024-07-31 21:37:29.805009334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ba20d966-e194-474e-a24c-6c7996989e36 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037912] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.873982] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.920716] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.346172] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.912930] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.065585] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061848] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.166323] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.160547] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289426] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.100697] systemd-fstab-generator[825]: Ignoring "noauto" option for root device
	[  +0.056106] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.885879] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[ +12.535811] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:33] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul31 21:35] systemd-fstab-generator[5234]: Ignoring "noauto" option for root device
	[  +0.067044] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:37:29 up 8 min,  0 users,  load average: 0.05, 0.09, 0.06
	Linux old-k8s-version-275462 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000dc60a0, 0xc000d99a10, 0x23, 0xc000d92cc0)
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: created by internal/singleflight.(*Group).DoChan
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: goroutine 168 [runnable]:
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: net._C2func_getaddrinfo(0xc000db4320, 0x0, 0xc000dbcd20, 0xc000db0148, 0x0, 0x0, 0x0)
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         _cgo_gotypes.go:94 +0x55
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: net.cgoLookupIPCNAME.func1(0xc000db4320, 0x20, 0x20, 0xc000dbcd20, 0xc000db0148, 0x95, 0xc00003cea0, 0x57a492)
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000d999e0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: net.cgoIPLookup(0xc000d55560, 0x48ab5d6, 0x3, 0xc000d999e0, 0x1f)
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]: created by net.cgoLookupIP
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5416]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 31 21:37:27 old-k8s-version-275462 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:37:27 old-k8s-version-275462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:37:27 old-k8s-version-275462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 31 21:37:27 old-k8s-version-275462 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:37:27 old-k8s-version-275462 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5482]: I0731 21:37:27.812166    5482 server.go:416] Version: v1.20.0
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5482]: I0731 21:37:27.812466    5482 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5482]: I0731 21:37:27.814930    5482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5482]: I0731 21:37:27.816647    5482 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 31 21:37:27 old-k8s-version-275462 kubelet[5482]: W0731 21:37:27.816962    5482 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (237.121613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275462" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (736.63s)
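
The SecondStart failure above comes down to the kubelet never becoming healthy on the v1.20.0 node: kubeadm's wait-control-plane phase times out, crictl finds no control-plane containers, and systemd shows the kubelet restarting repeatedly (restart counter is at 20) while warning that it cannot detect the current cgroup on cgroup v2. A minimal sketch of the troubleshooting path the output itself suggests, assuming shell access to the node via minikube ssh (the profile name old-k8s-version-275462 is taken from the log; the bare minikube binary name stands in for out/minikube-linux-amd64):

	# Inspect kubelet health and recent logs on the node, as suggested by the kubeadm output
	minikube -p old-k8s-version-275462 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-275462 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# Check whether CRI-O started any control-plane containers at all
	minikube -p old-k8s-version-275462 ssh "sudo crictl ps -a"
	# Retry with the cgroup driver hint from the exit message
	minikube start -p old-k8s-version-275462 --extra-config=kubelet.cgroup-driver=systemd

The underlying commands (systemctl status kubelet, journalctl -xeu kubelet, crictl ps -a, --extra-config=kubelet.cgroup-driver=systemd) are quoted from the failure output; only the minikube ssh wrapping is an assumption.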

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535: exit status 3 (3.167555606s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:27:16.768466 1147903 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	E0731 21:27:16.768490 1147903 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-755535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-755535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15521897s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-755535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535: exit status 3 (3.060528106s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 21:27:25.984535 1147967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host
	E0731 21:27:25.984558 1147967 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-755535" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
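
The EnableAddonAfterStop failure is different in kind: after the stop the VM is unreachable over SSH (dial tcp 192.168.39.145:22: connect: no route to host), so the host status reads Error instead of the expected Stopped and the addon enable exits with MK_ADDON_ENABLE_PAUSED. A hedged sketch of checking this by hand, using only commands that appear in the output above (profile name default-k8s-diff-port-755535 comes from the log; the bare minikube binary name stands in for out/minikube-linux-amd64):

	# Expect "Stopped"; the test instead saw "Error" because there was no route to the host
	minikube status --format={{.Host}} -p default-k8s-diff-port-755535
	# Re-run the addon enable once the host is reachable again
	minikube addons enable dashboard -p default-k8s-diff-port-755535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4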

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:43:16.766586271 +0000 UTC m=+5637.925017389
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-755535 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-755535 logs -n 25: (2.436539618s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
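
The `generating server cert` line above lists the SANs baked into server.pem (127.0.0.1, 192.168.50.203, embed-certs-563652, localhost, minikube). As a hedged illustration only, and not minikube's actual provision code, a minimal Go sketch of issuing such a SAN-bearing server certificate from an existing CA (assuming PEM-encoded RSA CA files) could look like this; the three-year lifetime happens to match the CertExpiration:26280h0m0s shown in the cluster config later in this log:

    // sancert.go: hypothetical sketch of signing a server cert with SANs from a CA.
    // Paths, org, and SANs mirror the log above; error handling is trimmed for brevity.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, _ := os.ReadFile("ca.pem")        // CA certificate (assumption: PEM)
    	caKeyPEM, _ := os.ReadFile("ca-key.pem") // CA private key (assumption: PKCS#1 PEM)

    	caBlock, _ := pem.Decode(caPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

    	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-563652"}},
    		DNSNames:     []string{"embed-certs-563652", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.203")},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // ~3 years
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
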
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
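
The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host, and accept the ~63ms skew as within tolerance. A minimal Go sketch of that comparison, assuming a 2-second tolerance (the actual threshold is not shown in this log), could be:

    // clockdelta.go: hypothetical sketch of the guest-vs-host clock check above.
    // Parses `date +%s.%N` output and reports whether the skew is tolerable.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// pad/truncate the fractional part to exactly nanoseconds
    		frac := (parts[1] + "000000000")[:9]
    		nsec, _ = strconv.ParseInt(frac, 10, 64)
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1722461337.115873615") // value from the log above
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed tolerance
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
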
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
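
The crio.go lines above show the verify-then-fallback pattern: the bridge netfilter sysctl read fails because br_netfilter is not loaded, so the provisioner runs `modprobe br_netfilter` and then enables IPv4 forwarding. A hedged Go sketch of that pattern (illustration only, requires root; not minikube's implementation):

    // netfilter.go: hypothetical sketch of the sysctl check with modprobe fallback.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// The bridge netfilter sysctl only exists once br_netfilter is loaded.
    	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		log.Printf("sysctl check failed (%v), loading br_netfilter", err)
    		if err := run("modprobe", "br_netfilter"); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v", err)
    		}
    	}
    	// Enable IPv4 forwarding, as the provisioner does with `echo 1 > ...`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		log.Fatalf("enable ip_forward: %v", err)
    	}
    }
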
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
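
The bash one-liner above rewrites /etc/hosts by dropping any stale `control-plane.minikube.internal` entry and appending the fresh mapping. A rough Go equivalent of the same drop-and-append pattern, offered purely as an illustration rather than minikube's code:

    // hostsentry.go: hypothetical sketch of the /etc/hosts update shown above.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	const entry = "192.168.50.203\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// mirror `grep -v $'\tcontrol-plane.minikube.internal$'`
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
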
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
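
The series of `openssl x509 -checkend 86400` probes above verifies that each control-plane certificate is still valid for at least 24 hours. A small Go sketch of the same check using crypto/x509 (the file path below is a placeholder, and this is an illustration rather than the tool's own logic):

    // checkend.go: hypothetical Go equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(86400 * time.Second) // same window as -checkend 86400
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate will expire within 86400s")
    		os.Exit(1)
    	}
    	fmt.Println("certificate will not expire within 86400s")
    }
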
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
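The default-k8s-diff-port machine above is polled for its DHCP lease with a growing retry interval until libvirt reports an IP. A minimal sketch of that retry-with-backoff pattern, where lookupIP is a hypothetical stand-in for querying the network's leases (the real retry.go also randomizes the interval):

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's DHCP
// leases; it returns an error until an address has been assigned.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		log.Printf("will retry after %v: waiting for machine to come up", backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the interval between attempts
	}
	log.Fatal("timed out waiting for machine to come up")
}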
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
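The embed-certs run above waits for each control-plane pod's Ready condition before moving on to the next component. A minimal sketch of the same readiness poll using kubectl's jsonpath output, assuming kubectl is on PATH and the kubeconfig context is already pointed at the cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// podReady asks the API server whether the named pod's Ready condition is True.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "etcd-embed-certs-563652")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to become Ready")
}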
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
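Here the preloaded image tarball is copied onto the node, unpacked into /var with extended attributes preserved, and then deleted. A local sketch of the same extraction step, assuming GNU tar and lz4 are installed and the tarball sits at the path shown in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the extraction in the log: keep security.capability xattrs,
	// decompress with lz4, and unpack under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload tarball: %v", err)
	}

	// The log removes the tarball once its contents are in place.
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}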
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
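Before falling back to pulling, each required image is looked up in the container runtime; anything absent is removed from crictl's view and marked "needs transfer" from the local cache, which is what the podman inspect calls above are doing. A minimal sketch of that presence check, assuming podman is installed and the image references are the tags listed in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the image's ID in the runtime, or "" if it is not present,
// mirroring the "podman image inspect --format {{.Id}}" calls in the log.
func imageID(ref string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	} {
		if id := imageID(ref); id == "" {
			fmt.Printf("%q needs transfer: not present in container runtime\n", ref)
		} else {
			fmt.Printf("%q present at %s\n", ref, id)
		}
	}
}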
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
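The openssl "-checkend 86400" calls above verify that each existing control-plane certificate is still valid for at least another 24 hours before it is reused. A minimal sketch of the same expiry check done natively with crypto/x509 instead of shelling out to openssl, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -noout -checkend" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		log.Fatal("certificate expires within 24h and would need to be regenerated")
	}
	fmt.Println("certificate is valid for at least another 24h")
}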
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
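The restart path above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it, so that the kubeadm init phases run next can regenerate them. A minimal sketch of that cleanup, assuming the same four files and endpoint shown in the log:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it so kubeadm can recreate it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("removing %s: %v", f, rmErr)
			}
			continue
		}
		log.Printf("%s already references %s; keeping it", f, endpoint)
	}
}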
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
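Each control-plane certificate is then checked with openssl x509 -checkend 86400, which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours). A short Go sketch of the same check (cert paths copied from the log; the certValidFor24h helper name is ours):

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h mirrors the logged "openssl x509 -noout -in <cert> -checkend 86400":
// openssl exits 0 only if the certificate is still valid 86400s from now.
func certValidFor24h(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", c, certValidFor24h(c))
	}
}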
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
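Because existing configuration was found, the restart path re-runs individual kubeadm init phases against the pinned binaries instead of doing a full kubeadm init. A hedged Go sketch of that sequence, invoking the same bash commands the log shows (the phase list and paths come from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the restart runs them above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}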
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
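The interleaved no-preload-018891 lines show libmachine polling the KVM network for the domain's DHCP lease and retrying with a growing, jittered delay until an IP appears. A generic retry sketch in Go of the same idea (lookupIP is a hypothetical placeholder for the lease query; the backoff constants are assumptions, not minikube's):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor's DHCP leases; placeholder only.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine up at", ip)
			return
		}
		// Jittered, growing delay, similar to the "will retry after ..." lines above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: waiting for machine to come up, retrying in %v\n", attempt, wait)
		time.Sleep(wait)
		backoff += backoff / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}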
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
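The healthz probe above starts out failing: first 403 because the anonymous request is rejected until RBAC bootstrap roles exist, then 500 while post-start hooks (bootstrap-controller, rbac/bootstrap-roles, system priority classes) finish, and it is retried until the endpoint returns 200. A minimal polling sketch in Go; the real client presents the cluster's CA and client certificates, so the InsecureSkipVerify shortcut and the 2-minute budget here are assumptions for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.145:8444/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code) // 403/500 while post-start hooks complete
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}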
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
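The pod_ready loop above repeatedly fetches each system-critical pod and keeps skipping it while the hosting node still reports Ready=False. A rough client-go sketch of the per-pod Ready check (kubeconfig path, namespace, and pod name copied from the log; the 4-minute budget matches the logged wait, everything else is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-t9v4z", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}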
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
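Addon installation is just kubectl apply against the staged manifests, run through the pinned kubectl binary with the cluster's kubeconfig. A small sketch of the metrics-server apply shown above (manifest list and paths from the log; output handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.30.3/kubectl apply -f <m1> -f <m2> ...
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", "/var/lib/minikube/binaries/v1.30.3/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}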
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
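
Note: each "Loading image:" / "Transferred and loaded ... from cache" pair above corresponds to one "sudo podman load -i <tarball>" run on the node over SSH. The Go sketch below mirrors that idea locally; the loadImage helper and the hard-coded tarball paths are illustrative only, not minikube's actual cache_images implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImage shells out to "sudo podman load -i <tarball>", the same command
// the log shows minikube running on the node for each cached image.
func loadImage(tarball string) error {
	start := time.Now()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("loaded %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	// Paths taken from the log above; adjust to whatever tarballs exist locally.
	images := []string{
		"/var/lib/minikube/images/etcd_3.5.14-0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
	}
	for _, img := range images {
		if err := loadImage(img); err != nil {
			fmt.Println(err)
		}
	}
}
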
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
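
Note: the kubeadm config printed above is generated on the host and written to /var/tmp/minikube/kubeadm.yaml.new before the restart. The Go sketch below renders a trimmed-down InitConfiguration/ClusterConfiguration with text/template to show the general approach; the template and the kubeadmParams struct are illustrative assumptions, not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical, trimmed-down stand-in for the values
// minikube substitutes into its kubeadm config; only the fields used below.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Values copied from the log above for illustration.
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.61.246",
		BindPort:          8443,
		NodeName:          "no-preload-018891",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.0-beta.0",
	}
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
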
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
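
Note: the "openssl x509 -noout -checkend 86400" runs above confirm that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The Go sketch below performs the equivalent check with crypto/x509 on a local PEM file; the expiresWithin helper and the example paths are illustrative, while minikube itself shells out to openssl on the node as shown in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window, i.e. the same condition that makes
// "openssl x509 -noout -checkend 86400" fail for a 24h window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example paths taken from the log above.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if soon {
			fmt.Printf("%s expires within 24h\n", c)
		}
	}
}
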
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
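
Note: the healthz polling above tolerates the early 403 ("system:anonymous") and 500 ("rbac/bootstrap-roles failed") responses and keeps retrying until /healthz returns 200 ok. The Go sketch below is a minimal stand-alone version of that poll loop; it skips TLS verification only to stay self-contained, whereas a proper check should trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the retry behaviour in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// NOTE: verification disabled only to keep the sketch self-contained;
		// a real check should use the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
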
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
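
Note: the pod_ready waits above repeatedly fetch each system pod and treat it as not Ready while its node still reports Ready=False. The client-go sketch below shows the basic form of such a Ready-condition poll; the kubeconfig path, namespace, pod name and timeout are placeholders, and the helper is not minikube's pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition
// or the timeout elapses.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	// Uses the default ~/.kube/config; the pod name is taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-018891", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
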
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
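The run of cri.go/logs.go entries above is minikube's log collector probing every expected control-plane component with crictl and finding no containers at all, which is why it then falls back to journalctl, dmesg and describe-nodes output. A minimal shell sketch of that same probe, assuming a shell on the node (for example via minikube ssh) with crictl on the PATH, would look roughly like:

	# Reproduce the per-component probe seen in the log above (sketch only).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done

An empty result for every name, as in this run, indicates CRI-O has no Kubernetes control-plane containers on the node, created or exited.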
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
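When every crictl probe comes back empty, the collector falls back to the four log sources listed in the "Gathering logs" lines. The same data can be pulled by hand with the commands the log already shows, assuming a shell on the node:

	# Collect the same diagnostics the log gatherer runs (sketch only).
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo $(which crictl || echo crictl) ps -a > containers.log

For this failure the kubelet and CRI-O journals are the informative ones, since the runtime never produced any control-plane containers to inspect directly.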
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
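Every describe-nodes attempt fails the same way: the connection to localhost:8443 is refused, which is consistent with the empty kube-apiserver probe rather than with a kubeconfig problem. A quick way to confirm that nothing is serving the API on the node (ss from iproute2 is assumed to be available; it is not among the logged commands):

	# Confirm the apiserver is genuinely absent before retrying (sketch only).
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: no apiserver container
	sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes

Until a kube-apiserver container exists and binds :8443, the describe-nodes step will keep failing with this exact error.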
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
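The interleaved pod_ready.go:102 lines come from three other profiles (process IDs 1148013, 1146656 and 1147232) whose clusters did start and which are now polling their metrics-server pods, none of which ever reports Ready. A rough CLI equivalent of that poll, assuming the conventional k8s-app=metrics-server label (the label is not shown in the log) and a placeholder profile name:

	# What the pod_ready.go wait corresponds to on the command line (sketch only).
	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	kubectl --context <profile> -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=120s

A pod that never reports Ready=True here is exactly what the repeated "Ready":"False" entries above are recording.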
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
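	Note: the per-component container discovery repeated above reduces to querying CRI-O by container name. The following is an illustrative sketch using only the crictl flags already shown in this log (the loop itself is hypothetical, not minikube's cri.go/logs.go code path):

	    # For each control-plane component, ask CRI-O for matching container IDs.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "no container found matching \"${name}\""
	      else
	        echo "${name}: ${ids}"
	      fi
	    done

	An empty result for every name, as seen here, is what drives the fallback to gathering kubelet, dmesg, CRI-O and container-status logs instead of per-container logs.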
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
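	Note: the grep/rm sequence above is a stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so that the subsequent kubeadm init can rewrite it. A minimal manual equivalent (illustrative helper, not minikube source):

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "${ENDPOINT}" "/etc/kubernetes/${f}" 2>/dev/null; then
	        sudo rm -f "/etc/kubernetes/${f}"
	      fi
	    done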
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
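	Note: the file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. A representative conflist of this shape is sketched below; it is an assumption for illustration only, and the exact 496-byte file minikube writes may differ in fields and pod subnet:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF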
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
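The addon-enable sequence above copies each manifest into /etc/kubernetes/addons/ on the node and then applies the set with the bundled kubectl against the in-VM kubeconfig (sudo KUBECONFIG=... kubectl apply -f ...). Below is a small, hypothetical sketch of that final apply step only; applyAddonManifests and the example paths are illustrative placeholders, not minikube's implementation.

// Illustrative only: assumes sudo accepts VAR=value assignments before the
// command and that the kubectl binary and kubeconfig paths exist on the host
// where this runs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Mirrors: sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	fmt.Println(strings.TrimSpace(string(out)))
	return nil
}

func main() {
	_ = applyAddonManifests(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/metrics-apiservice.yaml"},
	)
}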
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
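The readiness check in the lines above polls https://<node-ip>:<port>/healthz until the apiserver answers 200 with body "ok", then reads the control-plane version and moves on to the pod checks. A minimal sketch of that polling loop follows, assuming a placeholder URL and timeout; the real check also presents the cluster's client certificates, which this sketch skips.

// Illustrative only: a hypothetical healthz polling loop, not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves TLS signed by a cluster-internal CA; verification
		// is skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.145:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}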
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
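The run above ends with minikube's own remediation hint: inspect the kubelet journal and retry the start with an explicit kubelet cgroup driver. The following is only an illustrative sketch of how that hint could be applied from the test host, not part of the captured log; `<profile>` is a placeholder for the failing cluster's profile name, and the driver/runtime flags simply mirror the KVM/cri-o environment this report was produced on.

	# Hedged sketch: acting on the suggestion printed in the log above.
	# <profile> is a placeholder; substitute the failing profile name.

	# Check kubelet health and its recent journal entries on the node:
	minikube ssh -p <profile> "sudo systemctl status kubelet"
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"

	# Retry the start with the suggested kubelet cgroup-driver override:
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

The `--extra-config=kubelet.cgroup-driver=systemd` flag is the one named in the suggestion; it only helps when the kubelet and CRI-O disagree on the cgroup driver, which is one of the "kubelet is unhealthy" causes listed by kubeadm above.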
	
	
	==> CRI-O <==
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.633865739Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&PodSandboxMetadata{Name:busybox,Uid:873ec90f-0bdc-41a1-be49-45116eb0ccab,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461398831609374,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:29:50.875998006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-t9v4z,Uid:2b2a16bc-571e-4d00-b12a-f50dc462f48f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172246
1398723214495,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:29:50.875999197Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45c8d2ce1fe404d951facae5c5eb5dc105ed867873334817fb1c4223205c888d,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-968kv,Uid:c144d022-c820-43eb-bed1-80f2dca27ac0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461396926606347,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-968kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c144d022-c820-43eb-bed1-80f2dca27ac0,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31
T21:29:50.875993730Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&PodSandboxMetadata{Name:kube-proxy-mqcmt,Uid:476ef297-b803-4125-980a-dc5501361d71,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461391189907934,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b803-4125-980a-dc5501361d71,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:29:50.876002871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:98ff2805-3db9-4c39-9a70-77073d33e3bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461391188724322,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-07-31T21:29:50.875995398Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-755535,Uid:3b38b6fd59462082d65a70ef38d1260f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461386360843333,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d1260f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.145:8444,kubernetes.io/config.hash: 3b38b6fd59462082d65a70ef38d1260f,kubernetes.io/config.seen: 2024-07-31T21:29:45.869445891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d
74,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-755535,Uid:d6c7970ae2afdf9f14e0079e6f9c4666,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461386358088961,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c7970ae2afdf9f14e0079e6f9c4666,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6c7970ae2afdf9f14e0079e6f9c4666,kubernetes.io/config.seen: 2024-07-31T21:29:45.869446872Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-755535,Uid:8fde881bd185e21fa8b63992d5565a66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461386347994900,Labels:map[string]string{component: etcd,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d5565a66,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.145:2379,kubernetes.io/config.hash: 8fde881bd185e21fa8b63992d5565a66,kubernetes.io/config.seen: 2024-07-31T21:29:45.898859692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-755535,Uid:25920a19635748b7933f5f3169669c05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461386335586105,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5f3169669c05,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 25920a19635748b7933f5f3169669c05,kubernetes.io/config.seen: 2024-07-31T21:29:45.869441945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ac783a3a-fa05-4202-9427-c97600b138d8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.634434659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1581557b-65cd-4223-906b-9283b6c87b5b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.634495660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1581557b-65cd-4223-906b-9283b6c87b5b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.634737282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1581557b-65cd-4223-906b-9283b6c87b5b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.641937345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6a9d0ff-d065-4b03-9cbd-dca990dc188e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.642266876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6a9d0ff-d065-4b03-9cbd-dca990dc188e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.644166477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a841938a-924d-4d57-9354-30906fc98807 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.644693828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462198644638562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a841938a-924d-4d57-9354-30906fc98807 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.645751550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9a2f595-9197-4082-84e5-02604cd83609 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.645918782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9a2f595-9197-4082-84e5-02604cd83609 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.646358509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9a2f595-9197-4082-84e5-02604cd83609 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.700612031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=450c0133-6799-44ff-8bf2-0942f759df33 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.700805184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=450c0133-6799-44ff-8bf2-0942f759df33 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.702231149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=580ae2bb-64b4-49bc-8b90-a3eb543e59a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.703526649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462198703393333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=580ae2bb-64b4-49bc-8b90-a3eb543e59a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.704364496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a0d274f-941e-429f-8448-e74c5ea7e57c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.704471434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a0d274f-941e-429f-8448-e74c5ea7e57c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.705114088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a0d274f-941e-429f-8448-e74c5ea7e57c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.751804083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0ae2639-129b-4e5e-b86f-ef18c19dfc5e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.751888176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0ae2639-129b-4e5e-b86f-ef18c19dfc5e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.753564230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78d6212f-5299-4d88-ac45-856c031d3b8a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.754201620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462198754165254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78d6212f-5299-4d88-ac45-856c031d3b8a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.754977932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d451100-d048-4e04-91c6-dc621094675b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.755056014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d451100-d048-4e04-91c6-dc621094675b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:18 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:43:18.755350591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d451100-d048-4e04-91c6-dc621094675b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1e6f4f2d56f3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   79c941e0df22b       busybox
	bcb32c8ad4c0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b248b79002e1e       coredns-7db6d8ff4d-t9v4z
	d88829a348f0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   ebf4bbfa181ae       storage-provisioner
	09a74d133e024       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   f834d6d69eecf       kube-proxy-mqcmt
	f7bd90ab6a69f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   ebf4bbfa181ae       storage-provisioner
	4c93a360c730d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   dfb00b692ae1e       kube-scheduler-default-k8s-diff-port-755535
	4cc8ee4ac01a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   151c367111654       etcd-default-k8s-diff-port-755535
	cc7cd56cee77f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   b248e01209ed3       kube-controller-manager-default-k8s-diff-port-755535
	147ee230f5cd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   81db95a009255       kube-apiserver-default-k8s-diff-port-755535
	
	
	==> coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39377 - 13287 "HINFO IN 5308087396783994287.1259092129968555909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023772523s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-755535
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-755535
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=default-k8s-diff-port-755535
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_24_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:24:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-755535
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:43:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:40:32 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:40:32 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:40:32 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:40:32 +0000   Wed, 31 Jul 2024 21:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    default-k8s-diff-port-755535
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ccb94d8906748b98a1ce78ffba483b6
	  System UUID:                0ccb94d8-9067-48b9-8a1c-e78ffba483b6
	  Boot ID:                    fa0b0819-13dd-4372-ada4-4524a3fff1a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-t9v4z                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-755535                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-755535             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-755535    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-mqcmt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-755535             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-569cc877fc-968kv                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-755535 event: Registered Node default-k8s-diff-port-755535 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-755535 event: Registered Node default-k8s-diff-port-755535 in Controller
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048270] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037647] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.035729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.272038] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.074648] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053590] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.195633] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.162145] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.296834] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.459158] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.065379] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.950249] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +5.595615] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.913988] systemd-fstab-generator[1597]: Ignoring "noauto" option for root device
	[  +3.766726] kauditd_printk_skb: 67 callbacks suppressed
	[Jul31 21:30] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] <==
	{"level":"info","ts":"2024-07-31T21:29:46.954075Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"44b3a0f32f80bb09","initial-advertise-peer-urls":["https://192.168.39.145:2380"],"listen-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.145:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:29:46.954154Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:29:46.9527Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-07-31T21:29:46.954227Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-07-31T21:29:46.954591Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
	{"level":"info","ts":"2024-07-31T21:29:46.956728Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:29:46.956954Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:29:48.595147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:29:48.59523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:29:48.595287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
	{"level":"info","ts":"2024-07-31T21:29:48.595307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:29:48.595315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-07-31T21:29:48.595328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:29:48.595338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-07-31T21:29:48.598517Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:default-k8s-diff-port-755535 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:29:48.599509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:29:48.5996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:29:48.599641Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:29:48.59952Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:29:48.602257Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	{"level":"info","ts":"2024-07-31T21:29:48.602516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:30:04.141054Z","caller":"traceutil/trace.go:171","msg":"trace[151961930] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"155.403671ms","start":"2024-07-31T21:30:03.985633Z","end":"2024-07-31T21:30:04.141037Z","steps":["trace[151961930] 'process raft request'  (duration: 155.286277ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:39:48.636222Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":833}
	{"level":"info","ts":"2024-07-31T21:39:48.644762Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":833,"took":"8.193884ms","hash":3978100767,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2584576,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-31T21:39:48.6449Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3978100767,"revision":833,"compact-revision":-1}
	
	
	==> kernel <==
	 21:43:19 up 13 min,  0 users,  load average: 0.16, 0.18, 0.11
	Linux default-k8s-diff-port-755535 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] <==
	I0731 21:37:50.969804       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:39:49.971900       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:39:49.972096       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:39:50.972979       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:39:50.973082       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:39:50.973110       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:39:50.973160       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:39:50.973231       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:39:50.974373       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:40:50.973625       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:40:50.973781       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:40:50.973821       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:40:50.974872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:40:50.974962       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:40:50.974971       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:42:50.974872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:42:50.974928       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:42:50.974937       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:42:50.976099       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:42:50.976170       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:42:50.976176       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] <==
	I0731 21:37:33.103235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:38:02.655080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:38:03.110613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:38:32.659592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:38:33.119388       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:02.664616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:39:03.126764       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:32.669348       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:39:33.134506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:40:02.674311       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:40:03.142055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:40:32.679911       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:40:33.148882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:41:01.934926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="107.413µs"
	E0731 21:41:02.684080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:41:03.156337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:41:12.930925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="49.254µs"
	E0731 21:41:32.688766       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:41:33.163739       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:42:02.693177       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:42:03.169901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:42:32.697983       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:42:33.177576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:43:02.702285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:43:03.186414       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] <==
	I0731 21:29:51.501362       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:29:51.512418       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0731 21:29:51.562418       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:29:51.564013       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:29:51.564097       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:29:51.568735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:29:51.569034       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:29:51.569083       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:29:51.573209       1 config.go:319] "Starting node config controller"
	I0731 21:29:51.573258       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:29:51.573998       1 config.go:192] "Starting service config controller"
	I0731 21:29:51.574072       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:29:51.574156       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:29:51.574192       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:29:51.673806       1 shared_informer.go:320] Caches are synced for node config
	I0731 21:29:51.674777       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:29:51.677772       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] <==
	I0731 21:29:47.600747       1 serving.go:380] Generated self-signed cert in-memory
	I0731 21:29:50.004743       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:29:50.004775       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:29:50.015322       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:29:50.015403       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 21:29:50.015411       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 21:29:50.015424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:29:50.025140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:29:50.025178       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:29:50.025196       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 21:29:50.025201       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 21:29:50.115617       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0731 21:29:50.126183       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:29:50.126184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:40:49 default-k8s-diff-port-755535 kubelet[937]: E0731 21:40:49.929975     937 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:40:49 default-k8s-diff-port-755535 kubelet[937]: E0731 21:40:49.930050     937 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:40:49 default-k8s-diff-port-755535 kubelet[937]: E0731 21:40:49.930204     937 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnf6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-968kv_kube-system(c144d022-c820-43eb-bed1-80f2dca27ac0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 21:40:49 default-k8s-diff-port-755535 kubelet[937]: E0731 21:40:49.930236     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:41:01 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:01.918428     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:41:12 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:12.918613     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:41:25 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:25.918990     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:41:40 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:40.918990     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:41:45 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:45.934494     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:41:45 default-k8s-diff-port-755535 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:41:45 default-k8s-diff-port-755535 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:41:45 default-k8s-diff-port-755535 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:41:45 default-k8s-diff-port-755535 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:41:53 default-k8s-diff-port-755535 kubelet[937]: E0731 21:41:53.918173     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:42:06 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:06.918641     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:42:17 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:17.918133     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:42:29 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:29.918262     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:42:44 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:44.917740     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:42:45 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:45.935857     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:42:45 default-k8s-diff-port-755535 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:42:45 default-k8s-diff-port-755535 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:42:45 default-k8s-diff-port-755535 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:42:45 default-k8s-diff-port-755535 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:42:59 default-k8s-diff-port-755535 kubelet[937]: E0731 21:42:59.919534     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:43:10 default-k8s-diff-port-755535 kubelet[937]: E0731 21:43:10.918440     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	
	
	==> storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] <==
	I0731 21:29:52.134752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:29:52.149066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:29:52.149191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:30:09.601573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:30:09.602006       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d151d35-bc05-48aa-ba90-b060f018e0df", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa became leader
	I0731 21:30:09.602239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa!
	I0731 21:30:09.702759       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa!
	
	
	==> storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] <==
	I0731 21:29:51.400055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:29:51.402208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-968kv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv: exit status 1 (85.276854ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-968kv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.92s)
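The recurring metrics-server ErrImagePull / ImagePullBackOff entries in the kubelet log above appear to be a consequence of the addon having been enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain (see the audit log further down), so that image pull can never succeed. The failure recorded here is presumably the same 9m0s wait for the kubernetes-dashboard pod that the embed-certs run below reports (k8s-app=kubernetes-dashboard in namespace kubernetes-dashboard). A minimal manual check, assuming the profile's kubeconfig context is still available, would be:

	kubectl --context default-k8s-diff-port-755535 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-755535 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard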

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 21:34:31.357617 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563652 -n embed-certs-563652
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:43:17.590232013 +0000 UTC m=+5638.748663163
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-563652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-563652 logs -n 25: (2.354301924s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
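
	The provisioning step above pushes a small shell snippet over SSH so that /etc/hosts maps 127.0.1.1 to the new hostname. A minimal Go sketch of building that snippet (buildHostsFixup is a hypothetical helper for illustration, not minikube's actual code):

	package main

	import "fmt"

	// buildHostsFixup reproduces the shell snippet seen in the log above: if
	// /etc/hosts has no entry for the hostname, either rewrite the existing
	// 127.0.1.1 line or append a new one.
	func buildHostsFixup(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		fmt.Println(buildHostsFixup("embed-certs-563652"))
	}
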
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
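
	The fix.go lines above compare the guest clock (read over SSH) against the host clock and only reset the guest clock when the skew exceeds a tolerance. A small sketch of that comparison; the 1s threshold is an assumption for illustration, the log only reports the ~63ms delta:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute guest/host clock skew is
	// small enough to skip resetting the guest clock.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(0, 1722461337115873615) // 2024-07-31 21:28:57.115873615 UTC
		host := guest.Add(-63285855 * time.Nanosecond)
		fmt.Println(withinTolerance(guest, host, time.Second)) // true: ~63ms skew
	}
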
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
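
	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports. A simplified Go sketch of the first two rewrites applied to an in-memory copy of the file (illustrative only; minikube performs the edits with sed over SSH, and the conmon_cgroup handling is abbreviated here):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies an approximation of the sed edits from the log:
	// pin the pause image and replace the cgroup_manager line, appending the
	// conmon_cgroup setting right after it.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`)
		return conf
	}

	func main() {
		in := `pause_image = "registry.k8s.io/pause:3.5"
	cgroup_manager = "systemd"
	`
		fmt.Print(rewriteCrioConf(in))
	}
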
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
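
	The netfilter check above probes the bridge sysctl first, falls back to loading br_netfilter when the sysctl file is absent, and then enables IPv4 forwarding. A sketch of the same fallback order using os/exec (command strings taken from the log, error handling simplified):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureNetfilter probes net.bridge.bridge-nf-call-iptables, loads the
	// br_netfilter module only if the probe fails, then turns on ip_forward.
	func ensureNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The sysctl is missing until the module is loaded, so this failure is expected.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("loading br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}
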
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
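
	Whether the preload is usable is decided by listing images through crictl and looking for the expected kube-apiserver tag (compare the earlier "couldn't find preloaded image ... assuming images are not preloaded" line with the "all images are preloaded" line above). A sketch of parsing "crictl images --output json" for that check; the struct fields are limited to what the check needs and are not minikube's actual types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImage mirrors just the repoTags field of the crictl JSON output.
	type crictlImage struct {
		RepoTags []string `json:"repoTags"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	// hasImage reports whether the crictl JSON output lists the given tag.
	func hasImage(raw []byte, tag string) (bool, error) {
		var out crictlImages
		if err := json.Unmarshal(raw, &out); err != nil {
			return false, err
		}
		for _, img := range out.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"]}]}`)
		ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.30.3")
		fmt.Println(ok)
	}
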
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
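
	Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 (a sketch, assuming the file holds a single PEM-encoded certificate):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same condition openssl's -checkend flag tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
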
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
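	(Taken together, the sed edits above leave the CRI-O drop-in configured for the old pause image and the cgroupfs driver. A sketch of the relevant /etc/crio/crio.conf.d/02-crio.conf settings afterwards, assuming the stock section layout shipped on the minikube ISO:)
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.2"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"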
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
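	(The status-255 sysctl probe a few lines up is expected on a freshly booted guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the failed probe is immediately followed by modprobe br_netfilter and then by enabling IPv4 forwarding for pod traffic.)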
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
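	(The repeated "will retry after ...: waiting for machine to come up" lines are libmachine polling libvirt for the guest's DHCP lease with a randomized, growing delay. A minimal, self-contained Go sketch of that polling pattern follows; the function names and timings are illustrative only, not minikube's actual retry.go:)
	
	  package main
	
	  import (
	  	"errors"
	  	"fmt"
	  	"math/rand"
	  	"time"
	  )
	
	  // errNoLease stands in for libvirt reporting that the domain has no DHCP lease yet.
	  var errNoLease = errors.New("unable to find current IP address")
	
	  // lookupIP is a placeholder for the real lease query; here it succeeds on the sixth attempt.
	  func lookupIP(attempt int) (string, error) {
	  	if attempt < 5 {
	  		return "", errNoLease
	  	}
	  	return "192.168.39.145", nil
	  }
	
	  // waitForIP polls lookupIP until it succeeds or the deadline passes, sleeping a
	  // randomized, slowly growing interval between attempts (compare the 247ms, 287ms,
	  // 317ms, 555ms, ... sequence logged above).
	  func waitForIP(timeout time.Duration) (string, error) {
	  	deadline := time.Now().Add(timeout)
	  	for attempt := 0; ; attempt++ {
	  		ip, err := lookupIP(attempt)
	  		if err == nil {
	  			return ip, nil
	  		}
	  		if time.Now().After(deadline) {
	  			return "", fmt.Errorf("waiting for machine to come up: %w", err)
	  		}
	  		delay := time.Duration(200+rand.Intn(100)*(attempt+1)) * time.Millisecond
	  		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
	  		time.Sleep(delay)
	  	}
	  }
	
	  func main() {
	  	ip, err := waitForIP(2 * time.Minute)
	  	fmt.Println(ip, err)
	  }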
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
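	(The tar flags above are worth noting: --xattrs --xattrs-include security.capability preserves file-capability extended attributes while the preloaded image store is restored under /var, so binaries inside the unpacked layers that rely on capabilities rather than setuid keep them.)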
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
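	(The warning above is non-fatal: the preload tarball did not yield the v1.20.0 images, the per-image cache on the host is also empty, so minikube gives up on pre-loading and leaves CRI-O to pull the registry.k8s.io images on demand once the static pods are created.)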
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
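	(After that rewrite, /etc/hosts on the guest carries a pinned entry for the control-plane name, equivalent to:)
	
	  192.168.72.107	control-plane.minikube.internal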
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
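	(The test/ln pairs above install each CA into OpenSSL's hashed-certificates directory: openssl x509 -hash -noout prints the subject-name hash of each cert — b5213941, 51391683 and 3ec20f2e here, matching the link names — and the <hash>.0 symlinks under /etc/ssl/certs are how OpenSSL locates a trusted CA by subject at verification time.)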
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
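	(The batch of openssl x509 -checkend 86400 probes is a freshness check on the existing control-plane certificates: the command exits non-zero if a certificate will expire within 86400 seconds, i.e. 24 hours, presumably so that stale certs can be regenerated before the cluster is restarted.)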
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
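	(The half-second cadence of the pgrep probes above is the apiserver wait loop: after the kubeadm init phases, minikube polls sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until a kube-apiserver process appears or the wait times out, before moving on to API health checks.)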
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
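
Editorial note: the three lines above compare the guest clock (read over SSH with "date +%s.%N", giving 1722461377.990571620) against the host clock and accept the 83.550705ms drift as within tolerance. The Go sketch below reproduces that arithmetic; the parsing helper and the 2-second tolerance are assumptions for illustration, not minikube's actual fix.go logic.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses a "seconds.nanoseconds" timestamp such as the
// one printed by "date +%s.%N" (nine fractional digits assumed).
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722461377.990571620")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta=%s, within tolerance: %v\n", delta, delta <= tolerance)
}
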
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
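
Editorial note: after restarting crio, the log waits up to 60s for /var/run/crio/crio.sock and then for a working crictl. The sketch below shows the general "wait for a socket path" pattern locally; minikube performs the equivalent check over SSH, and the 500ms poll interval here is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket blocks until the given path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
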
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
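
Editorial note: the bash one-liner above is an idempotent /etc/hosts update: drop any existing "host.minikube.internal" line, then append the desired mapping. The Go sketch below expresses the same idea; the scratch file name "hosts.example" is a stand-in so the example does not touch the real /etc/hosts, and minikube itself runs this as the shell command shown, not as Go code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends
// "<ip>\t<name>", matching the grep -v / echo pattern in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a scratch copy rather than the real /etc/hosts.
	sample := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	if err := os.WriteFile("hosts.example", []byte(sample), 0644); err != nil {
		panic(err)
	}
	if err := ensureHostsEntry("hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
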
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
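
The kubeadm.yaml written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for splitting such a file into its documents and printing each apiVersion/kind can help when inspecting a generated config by hand; this is an illustrative helper, not part of minikube:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // checkKubeadmConfig parses a multi-document kubeadm YAML file and prints
    // the apiVersion and kind of each document it contains.
    func checkKubeadmConfig(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				return nil // no more documents
    			}
    			return err
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }

    func main() {
    	// Path taken from the log above; adjust for a local copy.
    	if err := checkKubeadmConfig("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
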
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
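
Each `openssl x509 ... -checkend 86400` run above exits non-zero if the certificate will expire within the next 24 hours, which is what would trigger regeneration. The same check can be expressed with the Go standard library; a hedged sketch, not the code minikube actually runs:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // the given window -- the same test as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// Path copied from the log above as an example.
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
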
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
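
The healthz wait above issues anonymous HTTPS GETs against https://192.168.39.145:8444/healthz, so the first reply is a 403 (anonymous access to /healthz is only permitted once the RBAC bootstrap roles are installed), then 500s while post-start hooks finish, and finally 200. A rough Go sketch of such a polling loop, with the endpoint and timeout taken from the log as assumptions; not minikube's implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires. TLS verification is skipped because this probe,
    // like the anonymous check in the log, presents no client certificate.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.145:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
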
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
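
The pod_ready.go waits above repeatedly fetch each system-critical pod and bail out early while the hosting node is not Ready. A comparable loop written against client-go could look like the sketch below; the kubeconfig path and pod name are copied from the log purely as examples, and this is not minikube's waiter:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19360-1093692/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-755535", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
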
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
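	The per-image flow visible in the preceding lines is: inspect the image in the podman store, remove any stale tag via crictl, copy the cached tarball if it is not already on the node, and load it with podman. A standalone sketch of that check/remove/load sequence follows (image name and tarball path taken from the log; this is an illustration run locally, not minikube's actual loader, which drives the same commands over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the steps seen in the log above:
	// podman image inspect -> crictl rmi -> podman load -i <tarball>.
	func loadCachedImage(image, tarball string) error {
		// Is the image already present in the container runtime's store?
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil // already there, no transfer needed
		}
		// Drop any stale tag so the freshly loaded image takes its place.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		// Load the cached tarball into the CRI-O image store via podman.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/etcd:3.5.14-0", "/var/lib/minikube/images/etcd_3.5.14-0")
		fmt.Println("load result:", err)
	}
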
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
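	The block above is the certificate sanity pass: each CA is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs, and each control-plane certificate is checked with -checkend 86400, i.e. "will this still be valid 24 hours from now?". A small sketch of that expiry check over the same files is shown below, assuming openssl is on PATH (illustrative only; the log runs these over SSH on the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// -checkend 86400 exits non-zero if the cert expires within the next 24h.
			err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
			fmt.Printf("%-55s expires within 24h: %v\n", c, err != nil)
		}
	}
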
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
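	Rather than a full kubeadm init, the restart path above re-runs individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd. A sketch of the same phase sequence as a loop is shown below, with the binary path and config file taken from the log (illustrative only; the log issues the equivalent commands via sudo env PATH=... over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			// Every phase is pointed at the same regenerated kubeadm.yaml.
			args := append(append([]string{kubeadm}, p...), "--config", config)
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}
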
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
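	The 403 -> 500 -> 200 progression above is the apiserver coming up: requests are rejected while the RBAC bootstrap roles are still missing, the verbose healthz body then lists the post-start hooks that have not finished (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes), and finally /healthz returns "ok". A minimal poller against the same endpoint is sketched below; TLS verification is skipped because the minikube CA is not in the system trust store (illustrative only, not minikube's own health check):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.61.246:8443/healthz"
		for i := 0; i < 20; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s -> %d: %s\n", url, resp.StatusCode, string(body))
				if resp.StatusCode == http.StatusOK {
					return // healthz reported "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
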
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
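
The cycle above is minikube's log-collection fallback when it can find no running control-plane containers: it first looks for a kube-apiserver process, then asks the CRI runtime for each expected component by name, and every lookup returns empty. A rough manual equivalent from outside the node, assuming SSH access through minikube (the profile name is a placeholder; the pgrep and crictl invocations are copied from the log):

    # <profile> is a placeholder for the failing cluster's profile name
    PROFILE=<profile>
    minikube -p "$PROFILE" ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      minikube -p "$PROFILE" ssh "sudo crictl ps -a --quiet --name=$c"
    done
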
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
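
Every describe-nodes attempt above fails the same way: the bundled v1.20.0 kubectl cannot reach the apiserver because nothing is answering on localhost:8443, which is consistent with crictl finding no kube-apiserver container. A quick confirmation from the node could look like this (sketch only; assumes ss is available in the node image and <profile> is a placeholder):

    minikube -p <profile> ssh "sudo ss -ltnp | grep 8443 || echo 'nothing listening on :8443'"
    minikube -p <profile> ssh "sudo crictl ps -a --name=kube-apiserver"
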
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
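
Once the per-component lookups come back empty, each cycle gathers the same diagnostic bundle: kubelet and CRI-O journals, filtered kernel messages, container status, and a describe-nodes attempt. The same bundle can be pulled by hand; the commands below mirror the ones in the log (the container-status fallback is simplified to plain crictl, and <profile> plus the minikube ssh wrapper are added for illustration):

    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"
    minikube -p <profile> ssh "sudo journalctl -u crio -n 400"
    minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
    minikube -p <profile> ssh "sudo crictl ps -a"
    minikube -p <profile> ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
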
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
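
The interleaved pod_ready.go:102 lines come from other test processes running in parallel, each polling its metrics-server pod and never seeing the Ready condition turn True. That poll is equivalent to reading the pod's Ready condition directly; a sketch with kubectl (the context name is a placeholder, the pod name is copied from the log):

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-6jkw9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
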
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
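
Editor's note (sketch, not part of the harness output): the block above is one iteration of the diagnostic loop this run falls into — minikube probes CRI-O for each control-plane container, finds none, and falls back to collecting kubelet, dmesg, CRI-O, and container-status logs; the "describe nodes" step then fails because nothing is serving on localhost:8443. A rough manual reproduction of the same checks, using only commands already quoted in this log (the paths and the v1.20.0 kubectl binary are specific to this run and may differ elsewhere), would be:

	# Is any kube-apiserver container known to CRI-O? Empty output means none.
	sudo crictl ps -a --quiet --name=kube-apiserver

	# With no apiserver running, this is expected to fail with
	# "connection refused" on localhost:8443, as seen above.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig

	# Recent kubelet and CRI-O activity usually shows why the static pods
	# never came up.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
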
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
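
Editor's note (sketch, not part of the harness output): the pod_ready lines interleaved through this section come from separate parallel test processes (PIDs 1146656, 1147232, 1147424, 1148013), each polling a metrics-server pod that never reports Ready. A hand-run equivalent of that readiness check — the pod name is the one from this run, the jsonpath query is a generic readiness probe rather than anything the harness executes, and the appropriate --context for the profile under test is omitted — would be:

	# Prints "True" once the Ready condition is met; in this run it stays "False".
	kubectl -n kube-system get pod metrics-server-569cc877fc-6jkw9 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
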
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
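	(The cycle above — probing each control-plane component with crictl and, when nothing is found, falling back to kubelet/dmesg/CRI-O/container-status logs — repeats below roughly every three seconds. A minimal standalone sketch of that diagnostic pass, assuming root access on the node and using only the commands shown in the log (the helper names are illustrative, not minikube's real API):)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the container IDs crictl reports for one component name.
	func listContainers(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		anyFound := false
		for _, c := range components {
			ids := listContainers(c)
			fmt.Printf("%s: %d containers\n", c, len(ids))
			if len(ids) > 0 {
				anyFound = true
			}
		}
		if !anyFound {
			// Same fallback log sources the report gathers when no containers exist.
			for _, cmd := range []string{
				"sudo journalctl -u kubelet -n 400",
				"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
				"sudo journalctl -u crio -n 400",
				"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			} {
				out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
				fmt.Println(string(out))
			}
		}
	}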
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
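	The cycle above repeats throughout this failure: the log gatherer (process 1147424) probes the node for each control-plane container via crictl, finds none because kube-apiserver never started, then collects kubelet, dmesg, CRI-O, and container-status output; the "describe nodes" step fails because nothing is listening on localhost:8443. A minimal sketch of the equivalent manual probes, assuming the `minikube ssh` wrapper reaches the same VM (the wrapper is an assumption; the underlying commands are the ones shown verbatim in the log):

	    minikube ssh -- 'sudo crictl ps -a --quiet --name=kube-apiserver'   # empty output: the apiserver container never started
	    minikube ssh -- 'sudo journalctl -u kubelet -n 400'                 # kubelet logs
	    minikube ssh -- 'sudo journalctl -u crio -n 400'                    # CRI-O logs
	    minikube ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	    minikube ssh -- 'sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'
	    # the last command exits 1 with "The connection to the server localhost:8443 was refused"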
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
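	The interleaved pod_ready lines come from other minikube profiles running in parallel (processes 1148013, 1146656, and 1147232), each polling a metrics-server pod that never reports Ready. A hedged one-liner for reproducing that check by hand, with the pod name taken from the log and `<profile>` as a placeholder for the kubectl context (not named in this excerpt):

	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-968kv \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready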
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
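	Each "listing CRI containers" / "crictl ps -a --quiet --name=<component>" pair above is a per-component container lookup that keeps coming back empty because the control plane never started. A hedged Go sketch of that shell-out (illustrative only, not minikube's cri.go; the component list and the sudo/crictl invocation are taken from the log lines above):

	// Illustrative sketch only: shells out to crictl the same way the log lines
	// above do, once per control-plane component, and collects any container IDs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(name string) ([]string, error) {
		// Mirrors: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}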
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
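	The pod_ready lines above are a polling loop on the pod's Ready condition that gives up after the 4m0s deadline and falls back to a full kubeadm reset. A hedged client-go sketch of that kind of wait (illustrative only, not minikube's pod_ready.go; the kubeconfig path, poll interval, and pod name are assumptions for the sketch):

	// Hedged sketch (not minikube's pod_ready.go): poll a pod's Ready condition
	// until it is True or a 4m0s deadline expires, the same shape of wait the
	// pod_ready log lines above describe.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						fmt.Printf("pod %q Ready=%s\n", name, c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// The kubeconfig path is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs,
			"kube-system", "metrics-server-569cc877fc-6jkw9", 4*time.Minute); err != nil {
			fmt.Println("wait failed:", err)
		}
	}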
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
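	The grep/rm pairs above are the stale-config cleanup the log describes: any kubeconfig under /etc/kubernetes that does not reference the control-plane endpoint is removed so the kubeadm init that follows can regenerate it. A hedged Go sketch of that check (illustrative only, not minikube's kubeadm.go; endpoint and file list are taken from the log lines above):

	// Hedged sketch of the stale-config check shown above (not minikube's
	// kubeadm.go): any kubeconfig under /etc/kubernetes that does not reference
	// the control-plane endpoint is removed so `kubeadm init` can recreate it.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, conf := range confs {
			// Mirrors: sudo grep <endpoint> <conf>; a non-zero exit means the file
			// is missing or does not mention the endpoint.
			if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
				fmt.Printf("%s may not contain %s - removing\n", conf, endpoint)
				// Mirrors: sudo rm -f <conf>
				_ = exec.Command("sudo", "rm", "-f", conf).Run()
			}
		}
	}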
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
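	The repeated "kubectl get sa default" runs above act as a readiness gate after kubeadm init: the cluster is only treated as usable once the default ServiceAccount exists. A hedged sketch of that retry loop using the same kubectl shell-out (illustrative only; the kubectl path, interval, and deadline are assumptions, not minikube's values):

	// Hedged sketch of the post-init readiness gate shown above: keep running
	// `kubectl get sa default` until the default ServiceAccount exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"
		deadline := time.Now().Add(2 * time.Minute) // assumed deadline for the sketch
		for time.Now().Before(deadline) {
			// Mirrors: sudo <kubectl> get sa default --kubeconfig=<kubeconfig>
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists; cluster is usable")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}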
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
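The burst above is one full diagnostic pass: for each control-plane component the runner resolves a container ID with crictl, tails that container's logs, and then pulls the kubelet and CRI-O journals plus recent kernel warnings. The same pass can be reproduced by hand on the node; this sketch reuses exactly the name filters, --tail 400, and journalctl/dmesg flags logged above:

    # resolve each component's container ID, then tail its logs
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
      [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
    done
    # service-level logs and kernel warnings, as in the gathering pass above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400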
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
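The long run of "kubectl get sa default" calls above (one roughly every 500 ms from 21:34:02 to 21:34:14) is a poll: kubeadm has finished, and the elevateKubeSystemPrivileges step waits for the controller manager to create the default service account before applying the minikube-rbac binding. A minimal sketch of the same poll, using the binary and kubeconfig paths from the log:

    # wait until the default service account exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done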
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
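This burst is the readiness gate for embed-certs-563652: start the kubelet, wait for the node and the static control-plane pods to report Ready, confirm the kube-apiserver process exists, then hit its healthz endpoint over HTTPS. The same two checks by hand (a sketch; -k is an assumption, used here only because the apiserver's serving certificate is not in the host trust store):

    # is the apiserver process up, and does it answer healthz?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -k https://192.168.50.203:8443/healthz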
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
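These three lines come from the other cluster being brought up in parallel (pid 1147424) and are the classic kubeadm stall: the kubelet healthz probe on port 10248 is refused, so kubeadm keeps waiting. When debugging this by hand, the usual first checks are the same endpoint kubeadm prints plus the kubelet unit itself (a sketch):

    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 200 --no-pager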
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
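Addon enablement above is just an scp of each manifest into /etc/kubernetes/addons followed by a single kubectl apply over all of them, after which minikube only "verifies" that the addon objects exist. A manual equivalent using the paths from the log; the rollout check at the end is an assumption about how to confirm the Deployment actually comes up, not something the test runs:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
    # not part of the test flow: wait for the metrics-server Deployment to become available
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl \
      -n kube-system rollout status deployment/metrics-server --timeout=120s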
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
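The metrics-server pod above never reached Ready inside the 4m0s window. To inspect the same condition by hand against this cluster, a minimal sketch is below; the k8s-app=metrics-server label selector is an assumption about the addon's pod labels, not something taken from this log.

# Show status and recent events for the metrics-server addon pod
kubectl --context no-preload-018891 -n kube-system get pods -l k8s-app=metrics-server -o wide
kubectl --context no-preload-018891 -n kube-system describe pods -l k8s-app=metrics-server
# Rough equivalent of the Ready wait seen in the log
kubectl --context no-preload-018891 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=240s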
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
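The healthz probe that returned 200/ok above can be reproduced through kubectl once it has been configured for this cluster; a minimal sketch, assuming the no-preload-018891 context that this run ends up configuring:

# Ask the apiserver for its health status via the raw API path
kubectl --context no-preload-018891 get --raw='/healthz'
# A healthy control plane answers with the single word: ok
# The verbose form lists the individual health checks
kubectl --context no-preload-018891 get --raw='/healthz?verbose'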
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
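The troubleshooting steps quoted in the failure message above can be run as a single pass on the node. A minimal sketch, assuming a systemd host with the cri-o socket at /var/run/crio/crio.sock as shown in the log (--no-pager and the tail are only there to keep output short):

# Check whether the kubelet is running and look at its recent log
systemctl status kubelet --no-pager
journalctl -xeu kubelet --no-pager | tail -n 100
# Probe the kubelet health endpoint that kubeadm was polling
curl -sSL http://localhost:10248/healthz
# List control-plane containers that cri-o managed to start, if any
crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause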
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
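At this point minikube gives up on starting the cluster and switches to gathering diagnostics. If a failed run like this one needs to be reproduced by hand, the same data can be pulled with the commands shown in the Run: lines above; a sketch, assuming shell access to the node, using the binary and kubeconfig paths exactly as they appear in the log:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a
	sudo journalctl -u kubelet -n 400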
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
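The suggestion above points at a kubelet/runtime cgroup-driver mismatch as the likely cause. A hedged example of acting on it: only the --extra-config flag comes from the log; the profile name and the explicit crio runtime flag are assumptions for illustration.

	# retry the start with the kubelet pinned to the systemd cgroup driver
	minikube start -p <profile> --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd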
	
	
	==> CRI-O <==
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.363389946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=138400bc-cab9-4fd2-b596-2c185143a640 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.364408352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad798ce4-3bbf-43a2-975c-3abad2dab204 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.365016300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462199364995252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad798ce4-3bbf-43a2-975c-3abad2dab204 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.365803276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78cd860d-86dd-4982-9d3a-c773b488f871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.365915872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78cd860d-86dd-4982-9d3a-c773b488f871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.366133522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78cd860d-86dd-4982-9d3a-c773b488f871 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.394173138Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4a2089b8-add4-4236-81c6-5ec57919e3b2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.394425364Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655741457467,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T21:34:15.433703669Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:462d3b8efcfa334fbd404f88d8bbe805ff769e8daaf14728c60c8dfcc85619fd,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-7fxm2,Uid:2651e359-a15a-4958-a9bb-9080efbd6943,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655608901551,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-7fxm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2651e359-a15a-4958-a9bb-9080efbd694
3,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.300802392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h54vh,Uid:fd09813a-38fd-4620-8b89-67dbf0ba4173,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655424951666,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd09813a-38fd-4620-8b89-67dbf0ba4173,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.114705517Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h6wll,Uid:16a3c2ad-faff-49cf
-8a56-d36681b771c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655419236188,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.106765835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&PodSandboxMetadata{Name:kube-proxy-j6jnw,Uid:8e59f643-6f37-4f5e-a862-89a39008af1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655222103716,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:14.915683399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-563652,Uid:f017abb8a101cece6e5cd8ce971a8ba6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636093016352,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f017abb8a101cece6e5cd8ce971a8ba6,kubernetes.io/config.seen: 2024-07-31T21:33:55.635401348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-563652,Uid:464315ef6a164806305dfc2e66305983,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636077989854,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 464315ef6a164806305dfc2e66305983,kubernetes.io/config.seen: 2024-07-31T21:33:55.635400261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-563652,Uid:162d0f3c9978bf8fc52c13a660e67af3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636076708358,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.203:8443,kubernetes.io/config.hash: 162d0f3c9978bf8fc52c13a660e67af3,kubernetes.io/config.seen: 2024-07-31T21:33:55.635398594Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-563652,Uid:5d2dcaf67c9531013a82e502bb415293,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636076069265,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.203:2379,kubernetes.io/config.hash: 5d2dcaf67c9531013a82e502bb415293,kubernetes.io/config.seen: 2024-07-31T21:33:55.635394478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4a2089b8-add4-4236-81c6-5ec57919e3b2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.395110650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34781331-e975-4cf7-9989-906dfb884b11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.395182294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34781331-e975-4cf7-9989-906dfb884b11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.395361606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34781331-e975-4cf7-9989-906dfb884b11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.407781106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=539497f3-eecc-4abc-8606-92567ba366e4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.407873944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=539497f3-eecc-4abc-8606-92567ba366e4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.409206068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3e7fccd-500d-46e1-b4d0-c28e2aa59082 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.409649974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462199409614059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3e7fccd-500d-46e1-b4d0-c28e2aa59082 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.410503098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6c63c5f-51e2-49a8-9925-7eea8ac232cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.410579948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6c63c5f-51e2-49a8-9925-7eea8ac232cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.410861775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6c63c5f-51e2-49a8-9925-7eea8ac232cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.446794896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f81572ca-0364-41f0-9b5c-a3c44e6866e6 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.446917421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f81572ca-0364-41f0-9b5c-a3c44e6866e6 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.448804401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=214ef937-42b6-45cd-94c6-04110f865763 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.449305531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462199449279199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=214ef937-42b6-45cd-94c6-04110f865763 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.449961437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f13d5d5b-3d54-4744-8fc8-4e8ac488b816 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.450023591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f13d5d5b-3d54-4744-8fc8-4e8ac488b816 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:19 embed-certs-563652 crio[723]: time="2024-07-31 21:43:19.450206379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f13d5d5b-3d54-4744-8fc8-4e8ac488b816 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	929e8e0237b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6757bc5fa5813       storage-provisioner
	1917ee6e3264c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   094e2162eee08       coredns-7db6d8ff4d-h6wll
	ef7d9a266760a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8d6a86a376dea       coredns-7db6d8ff4d-h54vh
	32428cc9d80fa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   f19f1de752add       kube-proxy-j6jnw
	6144e374d65e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   08d21c82dbad9       etcd-embed-certs-563652
	8eaeda92ae420       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   ee26674ae0292       kube-scheduler-embed-certs-563652
	854524558aeaa       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   1f6693c12f028       kube-controller-manager-embed-certs-563652
	f06ea2f769524       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   1bacefb178d9a       kube-apiserver-embed-certs-563652
	
	
	==> coredns [1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-563652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-563652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=embed-certs-563652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:33:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-563652
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:43:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:39:27 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:39:27 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:39:27 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:39:27 +0000   Wed, 31 Jul 2024 21:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.203
	  Hostname:    embed-certs-563652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6b096e405244ed7a0b3856840b914ed
	  System UUID:                c6b096e4-0524-4ed7-a0b3-856840b914ed
	  Boot ID:                    7dd9ff6b-65f4-4768-9371-90df345781ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-h54vh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-h6wll                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 etcd-embed-certs-563652                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-563652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-563652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-j6jnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-embed-certs-563652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-7fxm2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-563652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-563652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-563652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s   node-controller  Node embed-certs-563652 event: Registered Node embed-certs-563652 in Controller
	
	
	==> dmesg <==
	[  +0.047815] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036939] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.717730] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.922139] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536373] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.803409] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.056483] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066342] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.166225] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.156031] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.317520] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jul31 21:29] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.067806] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.310897] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +6.110801] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.919884] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 21:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.798741] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +4.455930] kauditd_printk_skb: 57 callbacks suppressed
	[Jul31 21:34] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[ +12.906073] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.113917] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:35] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5] <==
	{"level":"info","ts":"2024-07-31T21:33:56.611013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c switched to configuration voters=(11207490620396238892)"}
	{"level":"info","ts":"2024-07-31T21:33:56.611195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","added-peer-id":"9b890156e43d782c","added-peer-peer-urls":["https://192.168.50.203:2380"]}
	{"level":"info","ts":"2024-07-31T21:33:56.613559Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:33:56.613711Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-07-31T21:33:56.627644Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-07-31T21:33:56.629532Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9b890156e43d782c","initial-advertise-peer-urls":["https://192.168.50.203:2380"],"listen-peer-urls":["https://192.168.50.203:2380"],"advertise-client-urls":["https://192.168.50.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:33:56.63033Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:33:56.654739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T21:33:56.654791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T21:33:56.654813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgPreVoteResp from 9b890156e43d782c at term 1"}
	{"level":"info","ts":"2024-07-31T21:33:56.654824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:33:56.654829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgVoteResp from 9b890156e43d782c at term 2"}
	{"level":"info","ts":"2024-07-31T21:33:56.65484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became leader at term 2"}
	{"level":"info","ts":"2024-07-31T21:33:56.654846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b890156e43d782c elected leader 9b890156e43d782c at term 2"}
	{"level":"info","ts":"2024-07-31T21:33:56.658883Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9b890156e43d782c","local-member-attributes":"{Name:embed-certs-563652 ClientURLs:[https://192.168.50.203:2379]}","request-path":"/0/members/9b890156e43d782c/attributes","cluster-id":"2f9dfc9eaa0376c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:33:56.659034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:33:56.659142Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:33:56.65957Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:33:56.674023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:33:56.670136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:33:56.676143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.203:2379"}
	{"level":"info","ts":"2024-07-31T21:33:56.686075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:33:56.720777Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:33:56.720967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:33:56.721029Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:43:19 up 14 min,  0 users,  load average: 0.08, 0.13, 0.09
	Linux embed-certs-563652 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968] <==
	I0731 21:37:16.185439       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:38:58.667870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:38:58.667954       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:38:59.668136       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:38:59.668347       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:38:59.668385       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:38:59.668291       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:38:59.668499       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:38:59.669401       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:39:59.669190       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:39:59.669275       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:39:59.669285       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:39:59.670352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:39:59.670471       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:39:59.670499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:41:59.669729       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:41:59.669788       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:41:59.669800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:41:59.670796       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:41:59.670891       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:41:59.670917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56] <==
	I0731 21:37:44.639869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:38:14.181496       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:38:14.646811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:38:44.186271       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:38:44.653881       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:14.190989       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:39:14.660918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:44.195029       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:39:44.671164       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:39:57.566004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="985.856µs"
	I0731 21:40:08.564164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.143µs"
	E0731 21:40:14.200236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:40:14.679136       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:40:44.204720       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:40:44.687454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:41:14.209072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:41:14.694808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:41:44.213215       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:41:44.701054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:42:14.217741       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:42:14.708467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:42:44.222276       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:42:44.715543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:43:14.227110       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:43:14.724177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e] <==
	I0731 21:34:15.948121       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:34:15.964353       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.203"]
	I0731 21:34:16.154746       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:34:16.154790       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:34:16.154812       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:34:16.160132       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:34:16.160769       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:34:16.161390       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:34:16.162806       1 config.go:192] "Starting service config controller"
	I0731 21:34:16.163070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:34:16.163137       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:34:16.163160       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:34:16.164054       1 config.go:319] "Starting node config controller"
	I0731 21:34:16.164578       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:34:16.265306       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:34:16.265427       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:34:16.265462       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4] <==
	W0731 21:33:58.758409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:58.758440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:58.758706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:33:58.758733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:33:58.760943       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:33:58.761025       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:33:59.618779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 21:33:59.618886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 21:33:59.709233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:33:59.709515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 21:33:59.726635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:59.726782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:59.733737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:33:59.734240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 21:33:59.749527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:33:59.750085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 21:33:59.796390       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:59.796505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:59.877049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:33:59.877106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 21:33:59.913559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:33:59.913609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:33:59.925308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 21:33:59.925386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0731 21:34:00.431558       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:41:01 embed-certs-563652 kubelet[3846]: E0731 21:41:01.565502    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:41:01 embed-certs-563652 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:41:01 embed-certs-563652 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:41:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:41:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:41:12 embed-certs-563652 kubelet[3846]: E0731 21:41:12.549570    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:41:25 embed-certs-563652 kubelet[3846]: E0731 21:41:25.551640    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:41:36 embed-certs-563652 kubelet[3846]: E0731 21:41:36.549870    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:41:50 embed-certs-563652 kubelet[3846]: E0731 21:41:50.550150    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:42:01 embed-certs-563652 kubelet[3846]: E0731 21:42:01.567710    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:42:01 embed-certs-563652 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:42:01 embed-certs-563652 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:42:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:42:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:42:05 embed-certs-563652 kubelet[3846]: E0731 21:42:05.550527    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:42:19 embed-certs-563652 kubelet[3846]: E0731 21:42:19.550295    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:42:32 embed-certs-563652 kubelet[3846]: E0731 21:42:32.550601    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:42:43 embed-certs-563652 kubelet[3846]: E0731 21:42:43.550136    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:42:57 embed-certs-563652 kubelet[3846]: E0731 21:42:57.549798    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:43:01 embed-certs-563652 kubelet[3846]: E0731 21:43:01.566715    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:43:01 embed-certs-563652 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:43:01 embed-certs-563652 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:43:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:43:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:43:10 embed-certs-563652 kubelet[3846]: E0731 21:43:10.550082    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	
	
	==> storage-provisioner [929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb] <==
	I0731 21:34:16.329349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:34:16.349022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:34:16.349073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:34:16.361778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:34:16.362404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd!
	I0731 21:34:16.363453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27f8747c-2d1c-42ed-bb17-b255ca34a55a", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd became leader
	I0731 21:34:16.463108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-563652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-7fxm2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2: exit status 1 (64.232825ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-7fxm2" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 21:35:54.405642 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 21:37:00.019317 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018891 -n no-preload-018891
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:43:48.812038191 +0000 UTC m=+5669.970469305
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-018891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-018891 logs -n 25: (2.12395932s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
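For readability, the runtime preparation logged above (disabling cri-docker/docker, pointing crictl at CRI-O, forcing the registry.k8s.io/pause:3.2 image and the cgroupfs driver, loading br_netfilter, and restarting CRI-O) condenses to the shell sketch below. The commands are the ones from the Run: lines above with their %s placeholders restored; the multi-unit systemctl calls are merged here purely as a summary, not as a supported minikube entry point.

    # stop and mask the Docker-based runtimes so CRI-O owns the CRI socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service

    # point crictl at CRI-O, then set the pause image and cgroup driver in the CRI-O drop-in
    printf '%s\n' "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

    # kernel prerequisites for the bridge CNI, then restart the runtime
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio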
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
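For context, the preload path taken above amounts to: verify lz4 on the guest, confirm /preloaded.tar.lz4 is not already there, copy the ~450 MB cached tarball over SSH, unpack it into /var with xattrs preserved, and delete the tarball. A rough shell equivalent follows; the scp invocation is an illustrative stand-in for minikube's built-in SSH copy (ssh_runner), not the exact mechanism.

    # on the guest: lz4 must be available and the tarball is expected to be absent
    which lz4
    stat -c "%s %y" /preloaded.tar.lz4    # status 1 here means "not copied yet"

    # from the host: push the cached tarball to the guest (stand-in for ssh_runner's scp)
    scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.72.107:/preloaded.tar.lz4

    # on the guest: unpack into /var, preserving security.capability xattrs, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4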
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
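The failed image load above follows a simple per-image pattern: probe the runtime with podman for the expected image, remove any stale copy with crictl, then fall back to loading the image file from the host-side cache. It is that last step that fails here, because .minikube/cache/images has no kube-controller-manager_v1.20.0 file on this runner. A per-image sketch using the same commands seen in the log:

    IMG=registry.k8s.io/kube-apiserver:v1.20.0
    sudo podman image inspect --format '{{.Id}}' "$IMG"   # missing or unexpected ID => needs transfer
    sudo /usr/bin/crictl rmi "$IMG"                        # drop any stale copy before reloading from cache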
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
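Both /etc/hosts edits above (host.minikube.internal earlier and control-plane.minikube.internal here) use the same idempotent pattern: filter out any existing entry, append the current one, and copy the temp file back over /etc/hosts. With the values from this run:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.72.107\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$; \
    sudo cp /tmp/h.$$ /etc/hosts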
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
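Two OpenSSL idioms show up in the certificate block above: installed CA certificates are made visible to OpenSSL-based clients by symlinking them under their subject hash in /etc/ssl/certs, and each control-plane certificate is checked for expiry within the next day via -checkend. Condensed from the commands above (b5213941 is the hash produced for minikubeCA.pem in this run):

    # expose a CA to OpenSSL's hashed lookup directory
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

    # non-zero exit if the cert expires within the next 86400 seconds
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400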
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
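The 1147424 run above has just replayed the kubeadm init phases and is now polling for a kube-apiserver process roughly twice a second. A minimal sketch of that wait loop, assuming a generic command runner in place of minikube's ssh_runner (the runner, timeout and 500ms interval here are illustrative stand-ins, not minikube's exact values):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// runCmd stands in for minikube's ssh_runner; it runs the command locally so
// the sketch stays self-contained (an assumption, not the real transport).
func runCmd(cmd string) error {
	return exec.Command("/bin/sh", "-c", cmd).Run()
}

// waitForAPIServer polls until the pgrep check from the log succeeds or the
// timeout expires.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runCmd("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil // apiserver process appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServer(90 * time.Second); err != nil {
		fmt.Println(err)
	}
}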
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
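Meanwhile the 1148013 run is still waiting for libvirt to report an IP address for default-k8s-diff-port-755535, retrying with a growing delay (1.9s, 3.6s, then 4.0s above). A hedged sketch of that retry-with-backoff pattern; the lookupIP stub and the backoff policy are assumptions for illustration, not libmachine's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt DHCP leases for the
// domain's current IP; it fails a few times here so the retry is visible.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.145", nil
}

// waitForIP retries with a randomized, growing delay, similar in spirit to
// the retry.go lines in the log (the exact backoff policy is assumed).
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += time.Second // grow the base delay each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}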
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
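configureAuth above copies the CA material from .minikube/certs and then generates a fresh server certificate whose SANs cover 127.0.0.1, 192.168.39.145, the machine name, localhost and minikube before scp'ing it to /etc/docker. A self-contained sketch of issuing such a SAN'd server certificate with Go's crypto/x509; the throwaway CA, key sizes and validity periods are illustrative assumptions (the log shows minikube reusing the existing ca.pem/ca-key.pem instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-755535"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-755535", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.145")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit server.pem; the private key and CA would be written the same way.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}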
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
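fix.go then reads the guest's clock over SSH (date +%s.%N, rendered above with the report's %!s(MISSING) printf quirk) and checks the skew against the host clock. A tiny sketch of that delta check using the values from the log; the one-second tolerance is an assumed figure, not necessarily minikube's:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host timestamps as reported in the log above.
	guest := time.Date(2024, 7, 31, 21, 29, 37, 990571620, time.UTC)
	host := time.Date(2024, 7, 31, 21, 29, 37, 907020915, time.UTC)

	delta := guest.Sub(host)         // 83.550705ms, matching the log
	ok := delta.Abs() <= time.Second // tolerance chosen for illustration
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}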
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
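Getting here meant rewriting CRI-O's drop-in config with a series of sed one-liners: point pause_image at registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", open net.ipv4.ip_unprivileged_port_start via default_sysctls, then reload systemd and restart crio. A dry-run sketch that just prints that same edit sequence (printing instead of executing, and the crioEdits slice itself, are assumptions for illustration):

package main

import "fmt"

// Each entry mirrors one of the `sudo sed -i ...` one-liners in the log,
// all targeting /etc/crio/crio.conf.d/02-crio.conf.
var crioEdits = []string{
	`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`,
	`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	`/conmon_cgroup = .*/d`,
	`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	`/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d`,
	`s|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|`,
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	for _, script := range crioEdits {
		// Dry run only: print what would be executed instead of running sudo sed.
		fmt.Printf("sudo sed -i '%s' %s\n", script, conf)
	}
	// The log also guards the default_sysctls insert with a grep -q check,
	// then reloads systemd and restarts crio once the edits are in place.
	fmt.Println("sudo systemctl daemon-reload && sudo systemctl restart crio")
}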
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
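With no preloaded images in the CRI store, the 1148013 run checks for /preloaded.tar.lz4 on the guest, copies the 406 MB preload tarball over, and unpacks it into /var with lz4-compressed tar (the extraction completes about 2.4s later, below). A local sketch of the existence check plus extraction step; running tar via os/exec on the local machine is a stand-in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mimics the log's flow: skip straight to extraction if the
// tarball is already on the target, keeping xattrs so image layers restore
// with their capabilities intact.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing (would scp it over first): %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}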
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
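The block above is the multi-document kubeadm/kubelet/kube-proxy configuration that is copied to /var/tmp/minikube/kubeadm.yaml further down in this log. One quick way to sanity-check such a file is to walk its YAML documents; a minimal Go sketch, assuming the third-party gopkg.in/yaml.v3 module and the file path used later in this log:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // third-party module, assumed available
)

// Prints the apiVersion/kind of every document in the multi-document
// kubeadm config shown above. Illustrative sketch only.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}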
	
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
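The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. The same check can be done natively in Go with crypto/x509; a minimal sketch, with the certificate path and 24h window taken from the commands above (the helper name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresSoon reports whether the certificate in the given PEM file expires
// within the supplied window, mirroring `openssl x509 -checkend`. Sketch only.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}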
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
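The healthz polling above shows the restarted apiserver moving from 403 (the anonymous request is rejected) through 500 (post-start hooks still failing) to 200. A minimal Go sketch of polling /healthz until it returns 200, assuming the URL from this log; TLS verification is skipped here purely for brevity, whereas a faithful check would trust the cluster CA:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the context expires. Illustrative sketch only.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// InsecureSkipVerify only because this sketch has no CA bundle wired in.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.39.145:8444/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
		return
	}
	fmt.Println("healthz ok")
}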
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
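The pod_ready.go lines above wait for the system-critical pods to report the Ready condition, skipping each one while the node itself is still NotReady. A minimal client-go sketch of waiting for pods matching one of those labels (k8s-app=kube-dns) to become Ready, assuming the kubeconfig path shown later in this log and the 4m window used above; it deliberately omits the node-readiness handling that minikube performs:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod in the slice has the PodReady condition.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19360-1093692/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && allReady(pods.Items) {
			fmt.Println("all matching pods are Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pods to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}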
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
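
(For context: the addon flow above renders each manifest, copies it onto the node, then applies it with the kubectl binary and kubeconfig that already live on the VM. A minimal Go sketch of that two-step sequence, using the plain scp/ssh CLIs rather than minikube's internal ssh_runner; the host, key path, kubectl version and remote paths are taken from the log, while the local manifest name and the tmp-then-move shuffle are illustrative assumptions.)

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Values copied from the log above.
    	key := "/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa"
    	host := "docker@192.168.39.145"
    	remote := "/etc/kubernetes/addons/metrics-server-deployment.yaml"

    	// Step 1: copy the rendered manifest onto the node (to /tmp first, since the
    	// ssh user cannot write directly under /etc).
    	if out, err := exec.Command("scp", "-i", key,
    		"metrics-server-deployment.yaml", host+":/tmp/metrics-server-deployment.yaml").CombinedOutput(); err != nil {
    		log.Fatalf("scp failed: %v\n%s", err, out)
    	}

    	// Step 2: move it into place and apply it with the node-local kubectl and
    	// kubeconfig, mirroring the "kubectl apply -f" run in the log.
    	cmd := "sudo mkdir -p /etc/kubernetes/addons && " +
    		"sudo mv /tmp/metrics-server-deployment.yaml " + remote + " && " +
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.30.3/kubectl apply -f " + remote
    	if out, err := exec.Command("ssh", "-i", key, host, cmd).CombinedOutput(); err != nil {
    		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
    	}
    	fmt.Println("addon manifest applied")
    }
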
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
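
(The configureAuth/copyHostCerts step above reuses the workspace CA and mints a fresh server certificate whose organization and SANs are exactly the ones listed in the "generating server cert" line. A rough, self-contained crypto/x509 sketch of that signing step; it assumes PEM-encoded PKCS#1 RSA keys and a three-year validity, which are illustrative choices, not necessarily what minikube's provision code does.)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA certificate and private key (ca.pem / ca-key.pem in the log).
    	caCertPEM, err := os.ReadFile("ca.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Fresh key pair for the server certificate.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Organization and SANs copied from the "generating server cert" line.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-018891"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-018891"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.246")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Write out server.pem / server-key.pem, as the copyRemoteCerts step expects.
    	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
    	if err := os.WriteFile("server.pem", certOut, 0644); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("server-key.pem", keyOut, 0600); err != nil {
    		log.Fatal(err)
    	}
    }
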
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
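
(Net effect of the sed edits above: the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf ends up carrying at least the following key/value lines, which crio picks up on the systemctl restart a few lines below; surrounding TOML section headers are omitted here because the edits operate on the bare keys.)

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
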
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
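
(The node_ready/pod_ready lines interleaved through this stretch are simple polling loops against the apiserver. A minimal client-go sketch of the node-Ready half of that wait; the kubeconfig path and 2-second poll interval are illustrative assumptions, only the node name and the 6m0s timeout come from the log.)

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig (path here is illustrative).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Poll the node until its Ready condition turns True, as the log does for
    	// "default-k8s-diff-port-755535" (up to 6 minutes).
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-755535", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("timed out waiting for node to become Ready")
    }
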
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
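The pod_ready polling above is minikube's internal equivalent of waiting for the Ready condition on each control-plane pod. A rough manual equivalent (assuming the kubectl context is named after the profile, which minikube sets up by default) would be:

	kubectl --context default-k8s-diff-port-755535 -n kube-system \
	  wait --for=condition=Ready pod/kube-scheduler-default-k8s-diff-port-755535 --timeout=6m

Note that metrics-server never reaches Ready in this run, which is exactly what the repeated "Ready":"False" lines record.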
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
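Because this is a no-preload profile, every image is shipped as a tarball from the host-side cache and loaded into CRI-O's storage via podman, as the log above shows. Done by hand, the per-image sequence looks roughly like this (paths and tags taken from the log; other images follow the same pattern):

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.14-0   # already present?
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0                   # if not, load the cached tarball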
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
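The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node's own IP before kubeadm runs. The pin can be confirmed afterwards with:

	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.61.246	control-plane.minikube.internal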
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
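Each of the openssl calls above is a 24-hour expiry probe: -checkend 86400 makes openssl exit 0 if the certificate is still valid 86400 seconds from now and non-zero otherwise, and a non-zero exit is what would trigger certificate regeneration. For example:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"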
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
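On this restart path minikube does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config, as the five commands above show. The available sub-phases can be listed on the node with the bundled binary, e.g.:

	sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init phase --help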
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
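The repeated pgrep probes above (here and in the interleaved 1147424 log) poll for the apiserver process: -f matches the pattern against the full command line, -x requires that match to be exact, and -n returns only the newest matching PID. The probe succeeds once the static-pod apiserver is running:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # prints the apiserver PID once the static pod is up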
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
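The 403, then 500, then 200 progression above is the normal startup sequence: anonymous requests are rejected until the RBAC bootstrap roles exist, then /healthz reports the two pending post-start hooks as failed until they complete. The same check-by-check breakdown can be fetched manually; the verbose query parameter is a standard healthz feature and an addition here, not something the test itself uses:

	curl -k 'https://192.168.61.246:8443/healthz?verbose'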
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
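Interleaved with that loop, three other concurrently running test processes (1148013, 1146656 and 1147232) keep polling their metrics-server pods, and every pod_ready check in this stretch reports Ready False. A quick way to see why such a pod stays unready is to inspect its conditions and events; a minimal sketch, using one of the pod names from the log and a placeholder kubectl context (the real context name matches the minikube profile, which the log does not show):

profile=some-minikube-profile   # placeholder context/profile name, not taken from the log
kubectl --context "$profile" -n kube-system get pod metrics-server-569cc877fc-6jkw9 -o wide
kubectl --context "$profile" -n kube-system describe pod metrics-server-569cc877fc-6jkw9   # conditions and events explain the unready state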
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
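
Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is serving on the apiserver port yet. A quick reachability probe makes that failure mode explicit; this sketch is illustrative, not part of the test, and the address is taken from the error text above.

// apiserver_probe.go: check whether anything listens on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:8443" // the endpoint the kubectl calls above are refused on
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}

While this probe fails, every kubectl call against that endpoint will return the same "connection refused" error seen in the log.
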
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
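
Besides the container probes, each cycle collects the same three log sources: the kubelet unit, the CRI-O unit, and recent dmesg warnings. A small sketch of that collection, assuming a systemd host with passwordless sudo (the commands themselves are copied from the log above):

// log_bundle.go: gather the kubelet, CRI-O and dmesg output the cycles above collect.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("== %s %v (err=%v)\n%s\n", name, args, err, out)
}

func main() {
	run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	run("sudo", "journalctl", "-u", "crio", "-n", "400")
	run("/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
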
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
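
The lines above show the 4m0s readiness wait for metrics-server-569cc877fc-6jkw9 expiring, after which the restart of the primary control plane is abandoned and the cluster is reset. The sketch below shows the general shape of such a wait (poll the pod until its Ready condition is True or the deadline passes); it is not minikube's implementation, it assumes the k8s.io/client-go module and a kubeconfig at the default location, and the pod name is simply copied from the log.

// pod_ready_wait.go: poll a pod's Ready condition with a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	ns, name := "kube-system", "metrics-server-569cc877fc-6jkw9" // taken from the log above
	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
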
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
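
The sequence above is the stale-config cleanup before reinitialising: for each kubeconfig under /etc/kubernetes, grep for https://control-plane.minikube.internal:8443 and remove the file when the endpoint is absent (here all four files are already missing, so every grep fails and the rm is a no-op). A compact sketch of the same idea, not minikube's code; it would need to run as root, and the paths and endpoint are taken from the log.

// stale_config_cleanup.go: drop kubeconfigs that do not point at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing %s (missing or not pointing at %s)\n", f, endpoint)
			_ = os.Remove(f) // like the log above, ignore "No such file or directory"
		}
	}
}
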
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
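	Note: the 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. A minimal sketch of what a bridge conflist of this kind typically looks like (plugin options and the pod subnet are assumptions, not values read from the actual file):
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }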
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
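	Note: the healthz probe above queries the apiserver endpoint directly. A sketch of reproducing it by hand (assumes the default RBAC binding that permits unauthenticated access to /healthz; otherwise a client certificate from the node, e.g. under /var/lib/minikube/certs, would be needed):
	  curl -k https://192.168.50.203:8443/healthz
	  # expected response body on success: ok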
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
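	Note: the failed probe above is kubeadm polling the kubelet's local healthz endpoint on port 10248. A sketch of the usual on-node triage for this state (generic systemd/kubelet commands, not taken from this run):
	  systemctl status kubelet
	  sudo journalctl -u kubelet -n 200 --no-pager
	  curl -s http://localhost:10248/healthz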
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
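	Note: after the storageclass, storage-provisioner and metrics-server manifests above are applied, the metrics-server addon is normally verified through the aggregated metrics API. A sketch (standard kubectl usage, not commands captured in this run):
	  kubectl -n kube-system get deploy metrics-server
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl top nodes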
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
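The kubeadm failure above points at the kubelet: its health endpoint on port 10248 never answered. A minimal troubleshooting pass on the node, using only the commands the error message itself suggests (run via 'minikube ssh' or directly on the VM; this is a sketch for a reader reproducing the failure, not part of the test run):

    # Check whether the kubelet service is running and why it may have exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
    # Probe the kubelet health endpoint that kubeadm was polling
    curl -sSL http://localhost:10248/healthz
    # List any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause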
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
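The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Roughly the same check in shell form (a sketch assembled from the commands logged above; the endpoint and file list are taken verbatim from the log):

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Drop any kubeconfig that does not reference the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done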
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
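After the second kubeadm attempt times out, the scan above asks CRI-O for each expected component by name and finds none. The same sweep can be reproduced by looping the command the log shows over the component names (sketch; component list copied from the log lines above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        echo "== $name =="
        sudo crictl ps -a --quiet --name="$name"   # empty output means no such container was created
    done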
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
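The five 'Gathering logs' steps above collect kernel messages, the node description, the CRI-O journal, container status, and the kubelet journal. The same diagnostics can be gathered manually with the exact commands minikube ran (sketch for reproducing the collection outside the test harness):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 400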
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
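The K8S_KUBELET_NOT_RUNNING exit above comes with a concrete suggestion: force the kubelet onto the systemd cgroup driver. A minimal retry along those lines (the --extra-config flag is taken verbatim from the suggestion; the profile name, driver, and runtime flags are assumptions that would need to match the failing cluster):

    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
        --extra-config=kubelet.cgroup-driver=systemd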
	
	
	==> CRI-O <==
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.483588490Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-9w4w4,Uid:a8ee0da2-837d-46d8-9615-1021a5ad28b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461434588462260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:30:18.702493657Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&PodSandboxMetadata{Name:busybox,Uid:67c16d33-f140-4fe1-addb-121b6e20e72b,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1722461434587981224,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:30:18.702492148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7de9c8e5c3a24d855911e64d3333064930cc04825eeaaac0d79f1c359d660db6,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-c7lxw,Uid:6b18e5a9-5996-4650-97ea-204405ba9d89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461426788043534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-c7lxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b18e5a9-5996-4650-97ea-204405ba9d89,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:30:18.7
02490895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&PodSandboxMetadata{Name:kube-proxy-x2dnn,Uid:3a6403e5-f31e-4e5a-ba4f-32bc746c18ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461419017838212,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba4f-32bc746c18ec,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:30:18.702482304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:35fc2f0d-7f78-4a87-83a1-94558267b235,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461419015031173,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-07-31T21:30:18.702499048Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-018891,Uid:727de0fa3f6cbe53a76c06f29db5f604,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461415237029066,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.246:2379,kubernetes.io/config.hash: 727de0fa3f6cbe53a76c06f29db5f604,kubernetes.io/config.seen: 2024-07-31T21:30:14.765927536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-018891,
Uid:8b4479d2ecc9e7e300e8902502640890,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461415227920676,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8b4479d2ecc9e7e300e8902502640890,kubernetes.io/config.seen: 2024-07-31T21:30:14.708043810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-018891,Uid:80797f1f899f51d1cec6afc7d6cb6f43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461415218657223,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.246:8443,kubernetes.io/config.hash: 80797f1f899f51d1cec6afc7d6cb6f43,kubernetes.io/config.seen: 2024-07-31T21:30:14.708037754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-018891,Uid:e1cef0270e9353f8805fb0506ba7f946,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461415211419282,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e1cef0270e9353f8805fb0506ba7f946,ku
bernetes.io/config.seen: 2024-07-31T21:30:14.708042683Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3505e5a9-54bb-4478-aeb8-703e3739aa61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.484318896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdda3a4f-6d74-47e8-999b-6121e33c4700 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.484374461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdda3a4f-6d74-47e8-999b-6121e33c4700 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.484586473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdda3a4f-6d74-47e8-999b-6121e33c4700 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.495539991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ed4d6ce-dd78-45d2-9afd-863d85069187 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.495613247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ed4d6ce-dd78-45d2-9afd-863d85069187 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.504105260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1fb5af4-4784-4ba4-b65b-75658ffaf6a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.504748340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462230504718068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1fb5af4-4784-4ba4-b65b-75658ffaf6a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.505354273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a5f0186-294e-4aaa-9060-e76dff4196cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.505410054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a5f0186-294e-4aaa-9060-e76dff4196cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.505586009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a5f0186-294e-4aaa-9060-e76dff4196cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.545736328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=717b0740-0e14-4f8d-b221-4a444da9029a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.545831797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=717b0740-0e14-4f8d-b221-4a444da9029a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.547436103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26578489-ee49-4f28-963e-ce447afb2044 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.548042522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462230548008541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26578489-ee49-4f28-963e-ce447afb2044 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.550536615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=206547be-8a8c-481a-b938-fef843bfda7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.550638676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=206547be-8a8c-481a-b938-fef843bfda7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.550934239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=206547be-8a8c-481a-b938-fef843bfda7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.584712640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94b2f7ac-14a0-47d7-9726-673096f4d69e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.584789608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94b2f7ac-14a0-47d7-9726-673096f4d69e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.586153684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=168b2a34-2378-4ae7-aa9a-92f78d8fc2e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.586574658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462230586550962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=168b2a34-2378-4ae7-aa9a-92f78d8fc2e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.587166427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=243a0a0b-f2f2-4310-829b-2361b8769643 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.587223362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=243a0a0b-f2f2-4310-829b-2361b8769643 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:43:50 no-preload-018891 crio[721]: time="2024-07-31 21:43:50.587436795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=243a0a0b-f2f2-4310-829b-2361b8769643 name=/runtime.v1.RuntimeService/ListContainers
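	
	A quick way to cross-check the container inventory in the ListContainers responses above is to query CRI-O directly on the node. This is a minimal sketch, not part of the test run; the profile name is taken from this log and the crictl flags are standard, though output columns vary by version:
	
	  # open a shell on the node for this profile
	  minikube ssh -p no-preload-018891
	  # list all containers, including exited ones, as CRI-O sees them
	  sudo crictl ps -a
	  # narrow to the storage-provisioner restarts visible above
	  sudo crictl ps -a --name storage-provisioner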
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6da8e27e2fa41       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b8150e18accbb       busybox
	efba76f74230d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   658154f080370       coredns-5cfdc65f69-9w4w4
	a4d6f8d417836       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       4                   984ba1c1bd42f       storage-provisioner
	1aa83cc70feca       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   575654bf126ec       kube-proxy-x2dnn
	c579a97b62d1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       3                   984ba1c1bd42f       storage-provisioner
	e71c179bd22e9       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   066237e6eb604       kube-scheduler-no-preload-018891
	8d94e11c56302       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   05f4f00f9ac91       kube-controller-manager-no-preload-018891
	d614beb36e5ab       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   1120fbbd2a389       etcd-no-preload-018891
	a11eb6669e85e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   4a7176ca61e62       kube-apiserver-no-preload-018891
	
	
	==> coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37983 - 17262 "HINFO IN 7894977547777157273.8102779924257215395. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021019267s
	
	
	==> describe nodes <==
	Name:               no-preload-018891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-018891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=no-preload-018891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_20_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:20:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-018891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:43:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:41:00 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:41:00 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:41:00 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:41:00 +0000   Wed, 31 Jul 2024 21:30:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.246
	  Hostname:    no-preload-018891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad9f5af829224e0ca46f9d3d9a20647b
	  System UUID:                ad9f5af8-2922-4e0c-a46f-9d3d9a20647b
	  Boot ID:                    1d1d9902-9814-4a48-ab99-e976437c2299
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5cfdc65f69-9w4w4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-018891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-018891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-018891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-x2dnn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-018891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-78fcd8795b-c7lxw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-018891 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-018891 event: Registered Node no-preload-018891 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-018891 event: Registered Node no-preload-018891 in Controller
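	
	The conditions and events above come from the node description embedded in this log; to re-check node health for the profile by hand, a short sketch (the kubectl context is assumed to match the minikube profile name, which is minikube's default):
	
	  # full node description
	  kubectl --context no-preload-018891 describe node no-preload-018891
	  # just the Ready condition
	  kubectl --context no-preload-018891 get node no-preload-018891 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'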
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048838] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037940] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.054239] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944818] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543149] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.136393] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.060563] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072215] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.185917] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.147171] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.311239] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul31 21:30] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +0.056877] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.903083] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +4.564100] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.492811] systemd-fstab-generator[1963]: Ignoring "noauto" option for root device
	[  +5.243864] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.799549] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] <==
	{"level":"info","ts":"2024-07-31T21:30:15.969738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:30:15.969812Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:30:15.970057Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f649e0b6c01be2c4","local-member-id":"c9a5eb5753c44688","added-peer-id":"c9a5eb5753c44688","added-peer-peer-urls":["https://192.168.61.246:2380"]}
	{"level":"info","ts":"2024-07-31T21:30:15.970297Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f649e0b6c01be2c4","local-member-id":"c9a5eb5753c44688","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:30:15.972424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:30:15.970336Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.246:2380"}
	{"level":"info","ts":"2024-07-31T21:30:17.198774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:30:17.19886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:30:17.198901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgPreVoteResp from c9a5eb5753c44688 at term 2"}
	{"level":"info","ts":"2024-07-31T21:30:17.198948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.19896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgVoteResp from c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.198976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.198983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9a5eb5753c44688 elected leader c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.209986Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c9a5eb5753c44688","local-member-attributes":"{Name:no-preload-018891 ClientURLs:[https://192.168.61.246:2379]}","request-path":"/0/members/c9a5eb5753c44688/attributes","cluster-id":"f649e0b6c01be2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:30:17.210049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:30:17.210399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:30:17.210455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:30:17.210438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:30:17.211105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:30:17.211316Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:30:17.211991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.246:2379"}
	{"level":"info","ts":"2024-07-31T21:30:17.212198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:40:17.241063Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-07-31T21:40:17.249367Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":860,"took":"7.885289ms","hash":221528161,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2723840,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-31T21:40:17.249468Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":221528161,"revision":860,"compact-revision":-1}
	
	
	==> kernel <==
	 21:43:50 up 14 min,  0 users,  load average: 0.03, 0.09, 0.09
	Linux no-preload-018891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:40:19.504814       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:40:19.505041       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:40:19.506119       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:40:19.506190       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:41:19.506688       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:41:19.506892       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:41:19.506816       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:41:19.506997       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:41:19.508201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:41:19.508224       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:43:19.509172       1 handler_proxy.go:99] no RequestInfo found in the context
	W0731 21:43:19.509188       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:43:19.509443       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0731 21:43:19.509509       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:43:19.510675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:43:19.510745       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
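	
	The repeating 503s above are the API server's aggregation layer failing to reach the service backing v1beta1.metrics.k8s.io (the metrics-server addon). A minimal sketch for checking that APIService and its backing pods; the k8s-app=metrics-server label is an assumption, while the pod name metrics-server-78fcd8795b-c7lxw appears earlier in this log:
	
	  # is the aggregated API marked Available?
	  kubectl --context no-preload-018891 get apiservice v1beta1.metrics.k8s.io
	  # probe the aggregated endpoint directly
	  kubectl --context no-preload-018891 get --raw /apis/metrics.k8s.io/v1beta1
	  # check the backing pods
	  kubectl --context no-preload-018891 -n kube-system get pods -l k8s-app=metrics-server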
	
	
	==> kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] <==
	E0731 21:38:22.654403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:38:23.183585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:38:52.660513       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:38:53.190910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:22.666627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:39:23.198418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:39:52.671780       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:39:53.206053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:40:22.677767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:40:23.213901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:40:52.683786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:40:53.221356       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:41:00.064609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-018891"
	E0731 21:41:22.690373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:41:23.229631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:41:38.800261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="86.843µs"
	E0731 21:41:52.695893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:41:53.235998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:41:53.792364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="128.582µs"
	E0731 21:42:22.702558       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:42:23.243776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:42:52.708081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:42:53.251594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:43:22.714445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:43:23.259874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 21:30:19.341735       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 21:30:19.351979       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.246"]
	E0731 21:30:19.352064       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 21:30:19.391382       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 21:30:19.391439       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:30:19.391479       1 server_linux.go:170] "Using iptables Proxier"
	I0731 21:30:19.396114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 21:30:19.396904       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 21:30:19.397392       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:30:19.415226       1 config.go:197] "Starting service config controller"
	I0731 21:30:19.415409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:30:19.415484       1 config.go:104] "Starting endpoint slice config controller"
	I0731 21:30:19.415514       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:30:19.420381       1 config.go:326] "Starting node config controller"
	I0731 21:30:19.420416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:30:19.515518       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:30:19.515711       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:30:19.521584       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] <==
	I0731 21:30:16.525789       1 serving.go:386] Generated self-signed cert in-memory
	W0731 21:30:18.486935       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:30:18.487007       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:30:18.487017       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:30:18.487026       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:30:18.571690       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 21:30:18.571724       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:30:18.578433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:30:18.579437       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:30:18.579626       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:30:18.579789       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 21:30:18.680201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:41:23 no-preload-018891 kubelet[1295]: E0731 21:41:23.790563    1295 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:41:23 no-preload-018891 kubelet[1295]: E0731 21:41:23.790610    1295 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:41:23 no-preload-018891 kubelet[1295]: E0731 21:41:23.790749    1295 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7g47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-78fcd8795b-c7lxw_kube-system(6b18e5a9-5996-4650-97ea-204405ba9d89): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jul 31 21:41:23 no-preload-018891 kubelet[1295]: E0731 21:41:23.792191    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:41:38 no-preload-018891 kubelet[1295]: E0731 21:41:38.779505    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:41:53 no-preload-018891 kubelet[1295]: E0731 21:41:53.777814    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:42:07 no-preload-018891 kubelet[1295]: E0731 21:42:07.777882    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:42:14 no-preload-018891 kubelet[1295]: E0731 21:42:14.801892    1295 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:42:14 no-preload-018891 kubelet[1295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:42:14 no-preload-018891 kubelet[1295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:42:14 no-preload-018891 kubelet[1295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:42:14 no-preload-018891 kubelet[1295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:42:19 no-preload-018891 kubelet[1295]: E0731 21:42:19.777083    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:42:32 no-preload-018891 kubelet[1295]: E0731 21:42:32.777564    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:42:47 no-preload-018891 kubelet[1295]: E0731 21:42:47.778107    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:43:01 no-preload-018891 kubelet[1295]: E0731 21:43:01.777157    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]: E0731 21:43:14.779002    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]: E0731 21:43:14.799514    1295 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:43:14 no-preload-018891 kubelet[1295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:43:25 no-preload-018891 kubelet[1295]: E0731 21:43:25.778358    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:43:38 no-preload-018891 kubelet[1295]: E0731 21:43:38.778116    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:43:49 no-preload-018891 kubelet[1295]: E0731 21:43:49.778776    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	
	
	==> storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] <==
	I0731 21:30:19.942627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:30:19.954044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:30:19.954095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:30:37.358346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:30:37.358999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80815874-157f-46a3-99c5-ff3e7bda36cc", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368 became leader
	I0731 21:30:37.359219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368!
	I0731 21:30:37.460090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368!
	
	
	==> storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] <==
	I0731 21:30:19.247812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:30:19.250385       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-018891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-c7lxw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw: exit status 1 (65.998749ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-c7lxw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)
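For local triage of this failure, the post-mortem steps the harness ran above can be repeated by hand. A minimal sketch, assuming the no-preload-018891 minikube profile still exists on the host; the metrics-server pod name is copied from the non-running-pods list above and will differ between runs:

	# list pods that are not in phase Running across all namespaces
	# (same query as helpers_test.go:261 above)
	kubectl --context no-preload-018891 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe the non-running metrics-server pod; in this run the describe
	# came back NotFound because the pod had already been replaced, while the
	# kubelet log above shows ImagePullBackOff for
	# fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw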

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
E0731 21:39:31.357437 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
E0731 21:42:00.018908 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: [warning above repeated 150 more times; connection to 192.168.72.107:8443 still refused]
E0731 21:44:31.357914 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: [warning above repeated 20 more times; connection to 192.168.72.107:8443 still refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
E0731 21:45:03.065328 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
    [previous WARNING repeated 87 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
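These warnings come from the test helper repeatedly listing pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label while the apiserver at 192.168.72.107:8443 refuses connections; the final "client rate limiter Wait returned an error: context deadline exceeded" is what client-go reports once the poll's 9-minute window expires while a request is still waiting on the client's rate limiter. Below is a minimal sketch of this kind of label-selector poll, assuming client-go and a kubeconfig path; it is illustrative only, not the actual helpers_test.go code.

// Illustrative sketch only, not the actual helpers_test.go implementation:
// a label-selector pod poll with a 9-minute deadline, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the real tests point at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Retry every 3s until a dashboard pod is Running or 9 minutes elapse.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Mirrors the WARNING lines above: log the error and keep polling.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// With the apiserver unreachable for the whole window: "context deadline exceeded".
		fmt.Printf("pod \"k8s-app=kubernetes-dashboard\" failed to start within 9m0s: %v\n", err)
	}
}

With the apiserver down for the entire window, every List call fails, the condition never returns true, and the poll surfaces the deadline error reported in the failure below.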
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (229.467843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-275462" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
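Once the wait gives up, the harness checks the apiserver component with "out/minikube-linux-amd64 status --format={{.APIServer}}". The --format flag takes a Go text/template rendered against minikube's status value, which is why {{.APIServer}} prints "Stopped" above while {{.Host}} in the post-mortem below prints "Running" (the VM is up but the control plane is not); the "exit status 2 (may be ok)" note reflects that minikube signals degraded state through its exit code even when the query succeeds. A tiny illustration of how such a template selects one field follows; the Status struct here is a stand-in for illustration, not minikube's real type.

// Illustrative only: shows how a --format Go template such as {{.APIServer}}
// selects one field from a status value. The Status struct is a stand-in,
// not minikube's actual type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}

	// Roughly what: minikube status --format={{.APIServer}} does with its status value.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
	// Prints: Stopped
}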
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (229.308004ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25: (1.671169976s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
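The series of "openssl x509 -checkend 86400" runs above verifies that each control-plane certificate is still valid for at least the next 24 hours (86400 seconds). A minimal Go sketch of the same check follows; the certificate path is illustrative, and minikube itself shells out to openssl rather than parsing the certificate in-process:

// Sketch only: report whether a PEM certificate expires within the given window,
// mirroring what "openssl x509 -checkend 86400" tests for each cert above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: true if NotAfter falls inside now+window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path taken from the log lines above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	}
}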
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
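The healthz probes above show the usual recovery sequence: connection refused while the apiserver starts, 403 for the anonymous client before RBAC bootstrap completes, 500 while post-start hooks are still failing, and finally 200 "ok". A minimal Go sketch of such a polling loop follows; the endpoint URL and timeout are assumptions for illustration, and this is the general pattern rather than minikube's api_server.go implementation:

// Sketch only: poll an apiserver /healthz endpoint until it returns 200,
// treating connection errors, 403 and 500 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver's serving cert is not trusted by the probing host, so the
	// readiness probe skips verification, like an anonymous curl would.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 and 500 responses mean the apiserver is up but not yet ready.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.203:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}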
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
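(Aside: the %!s(MISSING) in the logged command above is a Go fmt artifact in minikube's command logging, not what actually ran on the guest. Judging from the CRIO_MINIKUBE_OPTIONS content echoed back just above, the remote command is along the lines of the following sketch; the exact quoting is an assumption.)

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio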
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
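(For clarity, the delta above is simply guest minus remote: 1722461357.608223429 - 1722461357.525761122 ≈ 0.082462307 s, i.e. the 82.462307 ms the log reports as within tolerance.)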
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
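(For reference, an illustrative reconstruction rather than output captured from this run: after the two sed edits and the conmon_cgroup insertion above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read back roughly as sketched by this check.)

	# hypothetical verification command, not run in this log
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected (approximately):
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"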
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
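(The short sequence above appears to be a fallback: the sysctl check fails because br_netfilter is not yet loaded, so the module is loaded and IPv4 forwarding is enabled explicitly. An equivalent manual check, sketched here as an assumption about the expected post-modprobe state, would be:)

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # typically reports "= 1" once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"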
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
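The block above is minikube's image-cache reconciliation for the v1.20.0 images: it asks the runtime for each image's stored ID (podman image inspect --format {{.Id}}), flags a mismatch as "needs transfer", removes the stale tag with crictl, and then tries to load the cached tarball (which fails here because the cache file is missing). A minimal Go sketch of the per-image check, assuming a host with podman on PATH; the helper and the expected-ID table are illustrative stand-ins, not minikube's actual code (the pause:3.2 ID is the one reported in the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks the container runtime for the stored ID of an image tag;
// it returns "" when the image is not present.
func imageID(tag string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", tag).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Expected IDs would come from the local image cache; the value below
	// is the pause:3.2 ID from the log, used here as a placeholder.
	expected := map[string]string{
		"registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	}
	for tag, want := range expected {
		if got := imageID(tag); got != want {
			fmt.Printf("%q needs transfer: runtime has %q, want %q\n", tag, got, want)
			// The real flow then removes the stale tag (crictl rmi) and
			// loads the cached tarball from .minikube/cache/images.
		}
	}
}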
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
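The kubeadm config dumped a few lines up is rendered from the kubeadm options struct logged at kubeadm.go:181, then shipped to /var/tmp/minikube/kubeadm.yaml.new by the scp step above. A rough Go sketch of that render step under the same parameters, using a trimmed template; the struct and template here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds only the fields this trimmed template needs; the real
// options struct carries many more.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.107",
		BindPort:         8443,
		NodeName:         "old-k8s-version-275462",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.20.0",
	}
	// Render to stdout; minikube instead copies the result to
	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}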
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
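The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the node IP so the control-plane alias resolves locally. The same rewrite as a standalone Go sketch (it must run as root to write /etc/hosts; the path and entry are taken from the log, error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.72.107\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale line for the control-plane alias, keep everything else
		// (mirrors the grep -v $'\tcontrol-plane.minikube.internal$' step).
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}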
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
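Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate expires within the next 24 hours; a cert that fails the check would be regenerated before the cluster restart continues. An equivalent check in Go with crypto/x509, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same question as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}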
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
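The repeated pgrep -xnf kube-apiserver.*minikube.* runs are the wait loop started at api_server.go:52: after the kubeadm init phases, minikube polls roughly every 500ms (visible in the timestamps) for the apiserver process to appear. A hedged Go sketch of that loop; the 2-minute timeout is illustrative, not minikube's actual value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}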
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
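The provisioning step above opens an SSH client to 192.168.39.145:22 with the machine's id_rsa key and the "docker" user, then runs each configuration command (here, writing /etc/sysconfig/crio.minikube and restarting CRI-O) over that connection. A minimal Go sketch of that "run one command over SSH" pattern, assuming golang.org/x/crypto/ssh and placeholder host/key values; this is an illustration, not minikube's actual ssh_runner implementation:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes a single command on host:22 using a private-key file and
// returns combined stdout/stderr, roughly what ssh_runner does per command.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values mirroring the log lines above; adjust to your environment.
	out, err := runRemote("192.168.39.145", "docker",
		"/path/to/.minikube/machines/default-k8s-diff-port-755535/id_rsa",
		"sudo systemctl is-active crio")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}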
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
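The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host-side timestamp, and accept the host because the delta (83.550705ms here) is within tolerance. A minimal sketch of that comparison, using the two timestamps from the log and assuming a one-second tolerance (the actual tolerance used by minikube is not stated in the log):

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether guest and remote timestamps differ by
// at most tol, mirroring the "guest clock delta is within tolerance" check.
func withinClockTolerance(guest, remote time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values taken from the log lines above.
	guest := time.Date(2024, 7, 31, 21, 29, 37, 990571620, time.UTC)
	remote := time.Date(2024, 7, 31, 21, 29, 37, 907020915, time.UTC)
	delta, ok := withinClockTolerance(guest, remote, time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}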
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
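After CRI-O is reconfigured and restarted, the log waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer before announcing "Preparing Kubernetes v1.30.3 on CRI-O 1.29.1". A minimal sketch of that socket wait, assuming a plain stat poll rather than minikube's internal retry helpers:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes,
// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRI socket is ready")
}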
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
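The preload step above copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 and unpacks it into /var with tar -I lz4; its completion is logged a few lines further down, after the interleaved no-preload machine output. A minimal sketch of that extraction step via os/exec, assuming tar and lz4 are installed on the target:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same flags as the logged command: preserve xattrs (including
	// security.capability) and decompress with lz4 while extracting into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Println("preload tarball extracted into /var")
}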
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
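The interleaved pod_ready lines from process 1147232 poll the metrics-server pod every couple of seconds and keep reporting Ready:"False". A minimal client-go sketch of that readiness poll, assuming a local kubeconfig path and the pod name from the log; minikube's own pod_ready helper differs in detail:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; point this at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-6jkw9", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
		if isPodReady(pod) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}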
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
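At this point the rendered kubeadm configuration shown above has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. The log does not validate it separately, but a short sketch of how such a file could be sanity-checked with the staged kubeadm binary is given below; the "kubeadm config validate" subcommand is assumed to be available in the v1.30 binary referenced in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Paths match the ones in the log; the validate subcommand exists in
	// recent kubeadm releases (this step is not performed by minikube itself).
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.3/kubeadm",
		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("validation failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}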
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
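The block above re-checks each control-plane certificate with openssl x509 -checkend 86400, i.e. "does this certificate expire within the next 24 hours". The equivalent check using Go's crypto/x509, assuming a PEM-encoded certificate file such as the apiserver-kubelet-client cert named in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("expires within 24h: %v\n", soon)
}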
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
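	The healthz sequence above is the usual recovery pattern after a control-plane restart: the anonymous probe is first rejected with 403, the endpoint then reports 500 while post-start hooks (start-service-ip-repair-controllers, rbac/bootstrap-roles, bootstrap-controller, ...) finish, and finally it returns 200. A minimal Go sketch of that kind of polling loop follows; the URL, interval, and timeout are illustrative and this is not minikube's api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 "ok" or the deadline expires. TLS verification is skipped because
	// the probe runs anonymously against the apiserver serving certificate,
	// mirroring the 403 -> 500 -> 200 sequence visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// 403: anonymous user not yet authorized; 500: post-start hooks still failing.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.145:8444/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}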
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
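	Each pod_ready failure above is reported only because the hosting node still has status "Ready":"False", so the per-pod wait is skipped and retried later. A hedged client-go sketch of the underlying node-condition check; the kubeconfig path is a placeholder and this is not the pod_ready.go logic itself, just the API calls it rests on.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the named node has its Ready condition set to
	// True, the same condition the log checks before trusting per-pod readiness.
	func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; the test run uses its own profile kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := nodeIsReady(clientset, "default-k8s-diff-port-755535")
		if err != nil {
			panic(err)
		}
		fmt.Println("node Ready:", ready)
	}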
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
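	The addon deploy step above is a plain kubectl apply against the copied manifests with an explicit KUBECONFIG, executed over SSH as root on the node. An illustrative local equivalent in Go (paths taken from the log; the remote ssh_runner wrapper and sudo are omitted, so this is only a sketch of the invocation shape):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddonManifests runs kubectl apply with one -f flag per manifest and an
	// explicit KUBECONFIG, matching the command shape shown in the log above.
	func applyAddonManifests(kubeconfig, kubectl string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		err := applyAddonManifests(
			"/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			manifests,
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
			os.Exit(1)
		}
	}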
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
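
The configureAuth phase logged above regenerates the Docker-machine style server certificate, signing it with the local CA and listing the VM's IP and hostnames as SANs (san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]) before copyRemoteCerts scp's server.pem, server-key.pem and ca.pem into /etc/docker on the guest. As a rough illustration of issuing such a certificate with Go's standard library (a sketch only; the function name and parameters are assumptions, not minikube's provision code):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with an
    // existing CA key pair. SANs that parse as IPs go into IPAddresses; the
    // rest become DNS names, which is why both 192.168.61.246 and
    // no-preload-018891 can appear in the san list above.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
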
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
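
The fix.go lines above run `date +%s.%N` on the guest and compare the result against the controller's clock; the ~88ms delta is inside tolerance, so no resynchronization is needed. A minimal sketch of that comparison (helper name and tolerance handling are assumptions for illustration; parsing as a float drops sub-microsecond precision):

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports the
    // absolute offset from the caller's clock plus whether it is within tol.
    func clockDeltaOK(guestOut string, local time.Time, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := local.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol, nil
    }

Fed the values from the log (guest 1722461396.115095611 against the remote timestamp 21:29:56.027049922), this yields the ~88ms delta reported on the fix.go:229 line.
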
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
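
The block of sed invocations above (21:29:57.139231 through 21:29:57.227660) rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A generic sketch of one such key rewrite in Go (an illustrative helper, not minikube's crio package):

    package sketch

    import (
        "os"
        "regexp"
    )

    // setCrioConfValue replaces an existing `key = ...` assignment with a quoted
    // value, the same effect as: sed -i 's|^.*key = .*$|key = "value"|' <path>.
    func setCrioConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

For example, setCrioConfValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10") mirrors the first sed call above.
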
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
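
Immediately above, the sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because the key only exists once br_netfilter is loaded, so the module is modprobe'd and IPv4 forwarding is switched on. A hedged Go sketch of the same fallback executed locally (minikube drives these commands over SSH via ssh_runner instead):

    package sketch

    import (
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter probes the bridge sysctl, loads br_netfilter when
    // the key is missing, and enables IPv4 forwarding, mirroring the three
    // commands in the log above. Requires root.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        // Equivalent to: sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }
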
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
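
The one-liner above keeps the host.minikube.internal mapping idempotent: any existing line ending in the name is filtered out with grep -v, the fresh 192.168.61.1 entry is appended, and the temp file is copied back over /etc/hosts. The same idea as a small Go sketch (paths and helper name are illustrative):

    package sketch

    import (
        "os"
        "strings"
    )

    // upsertHostsEntry rewrites hostsPath so exactly one line maps name to ip,
    // matching the grep -v / echo / cp pipeline in the log above.
    func upsertHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
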
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
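
The cache_images sequence above follows a check-then-load pattern for each required image: `podman image inspect` to see whether the image is already present at the expected hash, `crictl rmi` to clear a stale copy, then `podman load -i` on the tarball shipped from the host cache. One iteration of that loop as a rough local sketch (the real code runs every command through ssh_runner on the guest):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ensureImageLoaded loads image from a cached tarball unless it is already
    // present in podman's store, mirroring the inspect / rmi / load steps above.
    func ensureImageLoaded(image, tarball string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) != "" {
            return nil // already present, nothing to transfer
        }
        // Clear any stale tag before streaming in the cached copy.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
            return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
        }
        return nil
    }
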
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
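
The interleaved pod_ready lines from processes 1147232 and 1148013 above all poll the same thing: fetch a pod, look for its Ready condition, and retry on an interval until a deadline. Assuming client-go is available, a compact sketch of that wait loop:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady blocks until the named pod reports Ready=True or the timeout
    // expires, the same condition the pod_ready helpers above check repeatedly.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

In the run above, metrics-server-569cc877fc-968kv and metrics-server-569cc877fc-6jkw9 keep returning Ready:"False", so their loops simply keep polling until the test's own timeout expires.
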
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
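
The kubelet unit override above is rendered from the node's parameters (binary path for v1.31.0-beta.0, hostname-override no-preload-018891, node-ip 192.168.61.246) and is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. As an illustration of how such a drop-in could be assembled from those parameters (a simplified stand-in, not minikube's actual bootstrapper templates):

    package sketch

    import "fmt"

    // renderKubeletDropIn builds a cut-down version of the [Service] override
    // shown in the log; the file minikube writes carries more flags than this.
    func renderKubeletDropIn(kubernetesVersion, nodeName, nodeIP string) string {
        return fmt.Sprintf(
            "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/%s/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n\n[Install]\n",
            kubernetesVersion, nodeName, nodeIP)
    }
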
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
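Each `ln -fs` above publishes a CA under /etc/ssl/certs as `<subject-hash>.0`, where the hash is what the preceding `openssl x509 -hash -noout` printed (3ec20f2e, b5213941, 51391683); this is the same layout c_rehash/update-ca-certificates produce, so OpenSSL can look a CA up by subject hash. A minimal illustration of the pattern (not minikube's code):

	crt=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$crt" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$crt").0"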
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
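The `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 24 hours (86400 s) from now; exit status 0 means the certificate does not expire within that window. A hedged sketch of the same checks done by hand, with paths taken from the log:

	for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt front-proxy-client.crt \
	           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
	    || echo "$crt expires within the next 24h"
	done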
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
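The 403s and 500s above are expected while the restarted apiserver bootstraps: the anonymous probe is rejected until the RBAC bootstrap roles exist, and the 500s report the `[-]poststarthook/rbac/bootstrap-roles` and `[-]poststarthook/scheduling/bootstrap-system-priority-classes` checks still failing; once they pass, /healthz returns 200. A hedged sketch of the same probe made by hand, authenticated with the admin kubeconfig instead of anonymously (binary path and file locations taken from the log):

	sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl \
	  --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'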
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
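The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration recommended for the kvm2 driver + crio runtime combination. A generic illustration of a bridge conflist of that shape (field values are examples consistent with the 10.244.0.0/16 pod subnet above, not the exact bytes minikube wrote):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF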
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
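For reference, the addon state reported above ("Enabled addons: storage-provisioner, metrics-server, default-storageclass") can be re-checked by hand against the same profile; a minimal sketch, assuming the no-preload-018891 profile is still running and that minikube created a kubectl context of the same name (its default behaviour):

	# List addons as minikube sees them for this profile
	minikube addons list -p no-preload-018891
	# Confirm the metrics-server Deployment the addon installs is present and rolled out
	kubectl --context no-preload-018891 -n kube-system get deployment metrics-server
	kubectl --context no-preload-018891 -n kube-system rollout status deployment/metrics-server --timeout=2m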
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
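The repeated "connection refused" on localhost:8443 means nothing is listening on the apiserver port yet, which is consistent with the empty crictl listings above; a quick manual check from inside the node (reached via minikube ssh -p <profile>), offered only as a sketch:

	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# Probe the health endpoint directly; it fails with 'connection refused' while the apiserver is down
	curl -ksS https://localhost:8443/healthz || true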
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
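The node and pod readiness waits driven by node_ready.go and pod_ready.go above have rough kubectl equivalents; a minimal sketch, assuming the same context name, with timeouts chosen only for illustration:

	# Block until the node reports the Ready condition
	kubectl --context no-preload-018891 wait --for=condition=Ready node/no-preload-018891 --timeout=6m
	# Block until coredns (waited on above under the k8s-app=kube-dns label) reports Ready
	kubectl --context no-preload-018891 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m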
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
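Each pod_ready poll above reads the pod's Ready condition; to inspect why a pod such as the metrics-server one never reaches Ready, a sketch (pod name copied from the log; the k8s-app=metrics-server label is assumed to be the one the addon applies):

	# Print each metrics-server pod with its Ready condition status
	kubectl --context no-preload-018891 -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	# Show container state and events for the stuck pod
	kubectl --context no-preload-018891 -n kube-system describe pod metrics-server-78fcd8795b-c7lxw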
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
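The blocks above (and the near-identical blocks that follow) are minikube's log-gathering loop for this profile: on each pass it asks CRI-O for every expected control-plane container, finds none, and then collects kubelet, dmesg, CRI-O, and container-status output. A minimal hand-run sketch of the same checks, assuming shell access to the node (e.g. via `minikube ssh -p <profile>`; the profile name is not shown in this excerpt):

    # Each query returns an empty ID list because no control-plane container exists yet
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # Node and service logs the loop collects on every pass
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection refused" on localhost:8443 because the apiserver is not running
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig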
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
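The interleaved pod_ready lines come from the other StartStop profiles, each polling its metrics-server pod until the pod's Ready condition turns True. A hedged, hand-run equivalent of that readiness check (pod name copied from the log above; the --context value is a placeholder, not taken from this excerpt):

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-6jkw9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready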
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
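	(For reference, the cycle above — probe for a kube-apiserver process, list CRI containers by name, then gather kubelet/dmesg/CRI-O/container-status logs and retry — can be summarised by the minimal Go sketch below. This is not minikube's implementation: runOnGuest, the 3-second retry interval, and running commands locally instead of over SSH are assumptions made purely for illustration.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runOnGuest stands in for minikube's ssh_runner; this sketch simply runs the
	// command locally through bash and returns its combined output.
	func runOnGuest(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	// waitForAPIServer repeats the probe seen in the log: look for a running
	// kube-apiserver process and, while none is found, collect diagnostics.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when nothing matches, so err == nil means found.
			if _, err := runOnGuest("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil
			}
			// Mirror the "Gathering logs for ..." lines above.
			for _, c := range []string{
				"sudo journalctl -u kubelet -n 400",
				"sudo journalctl -u crio -n 400",
				"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			} {
				if out, err := runOnGuest(c); err != nil {
					fmt.Printf("diagnostic %q failed: %v\n%s\n", c, err, out)
				}
			}
			time.Sleep(3 * time.Second) // assumed interval, for illustration only
		}
		return fmt.Errorf("kube-apiserver did not start within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}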
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
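	(The 4m0s duration reported above is the end of a Ready-condition wait on the metrics-server pod, after which the control-plane restart is abandoned and the cluster is reset. A minimal sketch of that wait pattern follows, assuming a reachable kubeconfig and the client-go library; it illustrates the same poll-for-Ready idea, it is not minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		podName := "metrics-server-569cc877fc-6jkw9" // pod name taken from the log above
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors, as the log does
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		// A non-nil error here corresponds to the "timed out waiting 4m0s" line above.
		fmt.Println("wait result:", err)
	}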
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
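The enumeration above is minikube's container-discovery loop: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` and, when an ID comes back, dumps that container's recent log lines (here every lookup returns empty, hence the "No container was found" warnings). Below is a minimal Go sketch of the same pattern; it is illustrative only, not minikube's logs.go, with the crictl invocations copied from the log lines above.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherContainerLogs mirrors the loop in the log: list matching CRI
// containers for each component and, if any exist, dump their last 400
// log lines with crictl. Sketch only; error handling is deliberately thin.
func gatherContainerLogs(components []string) {
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
		}
	}
}

func main() {
	gatherContainerLogs([]string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	})
}
```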
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
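The `describe nodes` failure above is the expected symptom while the control plane is still down: nothing is listening on localhost:8443 yet, so kubectl's connection is refused. The small sketch below shows how one could pre-check that condition before shelling out to kubectl; the address and timeout are assumptions for illustration and the helper is not part of minikube.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer is a hypothetical helper: it checks whether anything is
// listening on the apiserver port, mirroring the "connection refused"
// failure seen in the log above.
func probeAPIServer(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8443", 2*time.Second); err != nil {
		fmt.Println(err) // expected while the control plane is still down
	}
}
```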
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
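The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at control-plane.minikube.internal:8443, otherwise it is removed before `kubeadm init` runs. A hedged Go sketch of that pattern follows; the file paths and grep target are taken from the log, but the helper itself is illustrative, not minikube's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it is removed so kubeadm can
// regenerate it. Sketch of the grep-then-rm pattern seen in the log.
func cleanStaleKubeconfigs() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing.
		if err := exec.Command("sudo", "grep",
			"https://control-plane.minikube.internal:8443", f).Run(); err != nil {
			fmt.Printf("%s is stale or missing, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() { cleanStaleKubeconfigs() }
```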
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
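The scp above drops a bridge CNI configuration into /etc/cni/net.d. The exact 496-byte file is not reproduced in the log; the snippet below writes a representative bridge-plus-portmap conflist in the standard CNI format, purely as an assumed example of what such a file can look like, not the file minikube actually ships.

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist (assumed example, not minikube's
// exact template): a bridge plugin with host-local IPAM plus portmap.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Written to a scratch path here; on the node the file lives at
	// /etc/cni/net.d/1-k8s.conflist as shown in the log line above.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```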
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
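The repeated pod_ready lines above poll the metrics-server pod's Ready condition until the 4m0s budget runs out, which is the "context deadline exceeded" summarized here. Below is a self-contained sketch of an equivalent check using kubectl's jsonpath output; the kubectl path and kubeconfig mirror earlier log lines, while the polling wrapper and the reuse of this pod name are assumptions made for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the pod's Ready condition, the same field the pod_ready
// lines above report as "Ready":"False". Illustrative sketch only.
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		if ok, err := podReady("kube-system", "metrics-server-569cc877fc-968kv"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("deadline exceeded waiting for Ready") // the failure seen above
}
```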
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
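This round of log gathering happens while the process is "waiting for apiserver healthz status" (announced a few lines earlier). A minimal sketch of such a health probe follows, assuming the standard /healthz endpoint on localhost:8443 and skipping TLS verification for brevity; minikube itself validates against the cluster CA, so both choices are illustrative shortcuts.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200.
// The URL and the InsecureSkipVerify shortcut are assumptions for this
// self-contained sketch, not minikube's implementation.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://localhost:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```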
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
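The long run of `kubectl get sa default` calls above, summarized by the elevateKubeSystemPrivileges duration metric, is a simple poll for the default service account to exist before RBAC setup continues. A sketch of that loop follows; the binary path and kubeconfig are copied from the log, while the timeout is an arbitrary choice for the example.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" until it
// succeeds, matching the ~0.5s cadence visible in the log. Illustrative
// sketch, not minikube's code.
func waitForDefaultServiceAccount(timeout time.Duration) error {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```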
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
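The health-check sequence logged above can be reproduced by hand against the same node. This is only a minimal sketch, assuming shell access to the 'no-preload-018891' VM (for example via 'minikube ssh -p no-preload-018891') and the apiserver endpoint shown in the log; all commands below are the ones the log itself runs.
	# List the kube-apiserver container the same way logs.go does
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail the kubelet journal gathered in the "Gathering logs for kubelet" step
	sudo journalctl -u kubelet -n 400
	# Probe the same healthz endpoint (-k because the apiserver serves a cluster-internal certificate)
	curl -k https://192.168.61.246:8443/healthz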
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
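Before the 'kubeadm reset' and retry below, the kubelet state can be confirmed on the node with the exact commands kubeadm suggests in the output above; a minimal sketch, assuming the CRI-O socket path quoted there:
	# Is the kubelet unit active and enabled?
	sudo systemctl status kubelet
	# Recent kubelet logs
	sudo journalctl -xeu kubelet
	# Any control-plane containers started by CRI-O?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause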
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.260648282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462393260628765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9210a74e-d7dd-47e7-b01f-65c4afa2d4cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.261094486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91ceded7-2b2b-4a1c-9cf2-84a85ec56d4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.261156046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91ceded7-2b2b-4a1c-9cf2-84a85ec56d4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.261191016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=91ceded7-2b2b-4a1c-9cf2-84a85ec56d4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.293887312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be4c065c-44f6-4898-a65e-8210524b06c9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.293972984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be4c065c-44f6-4898-a65e-8210524b06c9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.295077036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=852e20e0-97bb-4a46-9432-3985eca654a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.295474510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462393295451505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=852e20e0-97bb-4a46-9432-3985eca654a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.296073239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cf907f5-7cd7-4161-b3c8-7aee44c742d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.296154186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cf907f5-7cd7-4161-b3c8-7aee44c742d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.296202203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0cf907f5-7cd7-4161-b3c8-7aee44c742d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.328792630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b77c1de2-edab-436d-a0cb-76404469dfe4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.328863585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b77c1de2-edab-436d-a0cb-76404469dfe4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.329684400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c7f5eab-457a-44b8-9bd3-fcc8689eb661 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.330076657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462393330054781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c7f5eab-457a-44b8-9bd3-fcc8689eb661 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.330513301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98e6403-055f-4460-bc6d-0c86a829c797 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.330560341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98e6403-055f-4460-bc6d-0c86a829c797 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.330592069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d98e6403-055f-4460-bc6d-0c86a829c797 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.361035009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1137fa70-ee01-4b98-a17f-66b5357e5399 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.361110033Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1137fa70-ee01-4b98-a17f-66b5357e5399 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.362002078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73cf441e-11db-4ea0-be30-63816311c2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.362340894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462393362322765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73cf441e-11db-4ea0-be30-63816311c2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.362885254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41a77180-0d4c-45e1-91b7-ade14c929b05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.362934972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41a77180-0d4c-45e1-91b7-ade14c929b05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:46:33 old-k8s-version-275462 crio[640]: time="2024-07-31 21:46:33.362966420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=41a77180-0d4c-45e1-91b7-ade14c929b05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037912] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.873982] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.920716] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.346172] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.912930] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.065585] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061848] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.166323] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.160547] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289426] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.100697] systemd-fstab-generator[825]: Ignoring "noauto" option for root device
	[  +0.056106] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.885879] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[ +12.535811] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:33] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul31 21:35] systemd-fstab-generator[5234]: Ignoring "noauto" option for root device
	[  +0.067044] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:46:33 up 17 min,  0 users,  load average: 0.00, 0.03, 0.03
	Linux old-k8s-version-275462 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000afb280, 0xc000c900a0)
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: goroutine 171 [syscall]:
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: syscall.Syscall6(0xe8, 0xf, 0xc000d0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xf, 0xc000d0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0008af620, 0x0, 0x0, 0x0)
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000463d10)
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 31 21:46:28 old-k8s-version-275462 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:46:28 old-k8s-version-275462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:46:28 old-k8s-version-275462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 31 21:46:28 old-k8s-version-275462 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:46:28 old-k8s-version-275462 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6424]: I0731 21:46:28.771124    6424 server.go:416] Version: v1.20.0
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6424]: I0731 21:46:28.771377    6424 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6424]: I0731 21:46:28.774073    6424 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6424]: W0731 21:46:28.775602    6424 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 21:46:28 old-k8s-version-275462 kubelet[6424]: I0731 21:46:28.776761    6424 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (244.993572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275462" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.62s)
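The root cause visible in the logs above is that the kubelet inside the old-k8s-version-275462 VM never becomes healthy (systemd shows it crash-looping with the restart counter at 114), so the API server on localhost:8443 stays unreachable and every post-stop check times out. A minimal troubleshooting sketch, assuming the same profile name and following only the suggestions already embedded in the kubeadm output above (any other start flags from the original invocation would be kept as-is):

	# From inside the VM (minikube ssh -p old-k8s-version-275462): inspect the kubelet
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers the runtime managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the kubelet pinned to the systemd cgroup driver,
	# as suggested in the error output above
	out/minikube-linux-amd64 start -p old-k8s-version-275462 --extra-config=kubelet.cgroup-driver=systemd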

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:52:21.76016596 +0000 UTC m=+6182.918597093
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-755535 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (87.082912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-755535 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-755535 logs -n 25
E0731 21:52:23.067848 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-755535 logs -n 25: (1.31433545s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-605794 sudo cat                              | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo docker                           | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo cat                              | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo cat                              | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo                                  | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo cat                              | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo cat                              | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo containerd                       | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo systemctl                        | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo find                             | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-605794 sudo crio                             | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-605794                                       | auto-605794    | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC | 31 Jul 24 21:51 UTC |
	| start   | -p kindnet-605794                                    | kindnet-605794 | jenkins | v1.33.1 | 31 Jul 24 21:51 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p calico-605794 pgrep -a                            | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo cat                            | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo cat                            | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo cat                            | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo crictl                         | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo crictl                         | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC | 31 Jul 24 21:52 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p calico-605794 sudo find                           | calico-605794  | jenkins | v1.33.1 | 31 Jul 24 21:52 UTC |                     |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:51:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:51:27.531884 1157553 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:51:27.532445 1157553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:51:27.532460 1157553 out.go:304] Setting ErrFile to fd 2...
	I0731 21:51:27.532468 1157553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:51:27.532668 1157553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:51:27.533294 1157553 out.go:298] Setting JSON to false
	I0731 21:51:27.534534 1157553 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":20038,"bootTime":1722442649,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:51:27.534607 1157553 start.go:139] virtualization: kvm guest
	I0731 21:51:27.536403 1157553 out.go:177] * [kindnet-605794] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:51:27.537661 1157553 notify.go:220] Checking for updates...
	I0731 21:51:27.537685 1157553 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:51:27.539127 1157553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:51:27.540530 1157553 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:51:27.541815 1157553 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:51:27.543134 1157553 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:51:27.544350 1157553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:51:27.546174 1157553 config.go:182] Loaded profile config "calico-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:51:27.546349 1157553 config.go:182] Loaded profile config "custom-flannel-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:51:27.546486 1157553 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:51:27.546661 1157553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:51:27.586672 1157553 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:51:27.587961 1157553 start.go:297] selected driver: kvm2
	I0731 21:51:27.587986 1157553 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:51:27.588003 1157553 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:51:27.589242 1157553 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:51:27.589346 1157553 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:51:27.608033 1157553 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:51:27.608235 1157553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:51:27.608498 1157553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:51:27.608569 1157553 cni.go:84] Creating CNI manager for "kindnet"
	I0731 21:51:27.608578 1157553 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 21:51:27.608663 1157553 start.go:340] cluster config:
	{Name:kindnet-605794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:51:27.608799 1157553 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:51:27.610558 1157553 out.go:177] * Starting "kindnet-605794" primary control-plane node in "kindnet-605794" cluster
	I0731 21:51:24.468675 1155232 node_ready.go:53] node "calico-605794" has status "Ready":"False"
	I0731 21:51:26.968073 1155232 node_ready.go:53] node "calico-605794" has status "Ready":"False"
	I0731 21:51:29.126665 1155232 node_ready.go:53] node "calico-605794" has status "Ready":"False"
	I0731 21:51:28.821301 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:28.821814 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | unable to find current IP address of domain custom-flannel-605794 in network mk-custom-flannel-605794
	I0731 21:51:28.821841 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | I0731 21:51:28.821763 1156131 retry.go:31] will retry after 3.763704162s: waiting for machine to come up
	I0731 21:51:27.611802 1157553 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:51:27.611855 1157553 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:51:27.611868 1157553 cache.go:56] Caching tarball of preloaded images
	I0731 21:51:27.611944 1157553 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:51:27.611956 1157553 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:51:27.612072 1157553 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/config.json ...
	I0731 21:51:27.612121 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/config.json: {Name:mk8c69538ec9463f268b0ab790d6f9e2543491be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:27.612291 1157553 start.go:360] acquireMachinesLock for kindnet-605794: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:51:34.124895 1157553 start.go:364] duration metric: took 6.512582169s to acquireMachinesLock for "kindnet-605794"
	I0731 21:51:34.124972 1157553 start.go:93] Provisioning new machine with config: &{Name:kindnet-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:kindnet-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:51:34.125120 1157553 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:51:30.973401 1155232 node_ready.go:49] node "calico-605794" has status "Ready":"True"
	I0731 21:51:30.973428 1155232 node_ready.go:38] duration metric: took 8.508880476s for node "calico-605794" to be "Ready" ...
	I0731 21:51:30.973439 1155232 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:51:30.981323 1155232 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:32.989524 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:32.587557 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.588198 1156100 main.go:141] libmachine: (custom-flannel-605794) Found IP for machine: 192.168.50.144
	I0731 21:51:32.588224 1156100 main.go:141] libmachine: (custom-flannel-605794) Reserving static IP address...
	I0731 21:51:32.588238 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has current primary IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.588631 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | unable to find host DHCP lease matching {name: "custom-flannel-605794", mac: "52:54:00:c8:43:07", ip: "192.168.50.144"} in network mk-custom-flannel-605794
	I0731 21:51:32.677879 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Getting to WaitForSSH function...
	I0731 21:51:32.677918 1156100 main.go:141] libmachine: (custom-flannel-605794) Reserved static IP address: 192.168.50.144
	I0731 21:51:32.677933 1156100 main.go:141] libmachine: (custom-flannel-605794) Waiting for SSH to be available...
	I0731 21:51:32.680952 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.681381 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:32.681411 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.681555 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Using SSH client type: external
	I0731 21:51:32.681585 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa (-rw-------)
	I0731 21:51:32.681630 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:51:32.681649 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | About to run SSH command:
	I0731 21:51:32.681667 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | exit 0
	I0731 21:51:32.808445 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | SSH cmd err, output: <nil>: 
	I0731 21:51:32.808701 1156100 main.go:141] libmachine: (custom-flannel-605794) KVM machine creation complete!
	I0731 21:51:32.809040 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetConfigRaw
	I0731 21:51:32.809596 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:32.809806 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:32.809981 1156100 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:51:32.809998 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetState
	I0731 21:51:32.811591 1156100 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:51:32.811627 1156100 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:51:32.811635 1156100 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:51:32.811644 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:32.814115 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.814512 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:32.814539 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.814662 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:32.814848 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:32.815048 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:32.815228 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:32.815406 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:32.815654 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:32.815668 1156100 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:51:32.923445 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:51:32.923475 1156100 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:51:32.923487 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:32.926244 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.926627 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:32.926672 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:32.926890 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:32.927109 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:32.927288 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:32.927417 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:32.927618 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:32.927810 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:32.927825 1156100 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:51:33.041086 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:51:33.041224 1156100 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:51:33.041248 1156100 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:51:33.041264 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetMachineName
	I0731 21:51:33.041581 1156100 buildroot.go:166] provisioning hostname "custom-flannel-605794"
	I0731 21:51:33.041614 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetMachineName
	I0731 21:51:33.041846 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.044985 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.045353 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.045385 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.045570 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:33.045790 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.046005 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.046213 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:33.046447 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:33.046703 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:33.046722 1156100 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-605794 && echo "custom-flannel-605794" | sudo tee /etc/hostname
	I0731 21:51:33.177847 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-605794
	
	I0731 21:51:33.177885 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.180890 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.181278 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.181301 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.181545 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:33.181769 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.181969 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.182127 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:33.182329 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:33.182581 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:33.182605 1156100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-605794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-605794/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-605794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:51:33.304823 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
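(For reference, the two SSH commands above amount to roughly the following standalone step; a sketch only, assuming root on the guest, with the hostname hard-coded purely for illustration.)
	#!/usr/bin/env bash
	# Sketch of the hostname provisioning logged above: set the hostname and
	# make sure /etc/hosts has a matching 127.0.1.1 entry. Run as root.
	set -euo pipefail
	NAME="custom-flannel-605794"
	hostname "$NAME" && echo "$NAME" > /etc/hostname
	if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
	    else
	        echo "127.0.1.1 ${NAME}" >> /etc/hosts
	    fi
	fi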
	I0731 21:51:33.304857 1156100 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:51:33.304897 1156100 buildroot.go:174] setting up certificates
	I0731 21:51:33.304911 1156100 provision.go:84] configureAuth start
	I0731 21:51:33.304923 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetMachineName
	I0731 21:51:33.305274 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetIP
	I0731 21:51:33.308437 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.308912 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.308941 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.309117 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.311709 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.312077 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.312131 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.312391 1156100 provision.go:143] copyHostCerts
	I0731 21:51:33.312507 1156100 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:51:33.312524 1156100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:51:33.312612 1156100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:51:33.312737 1156100 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:51:33.312750 1156100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:51:33.312792 1156100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:51:33.312861 1156100 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:51:33.312869 1156100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:51:33.312891 1156100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:51:33.312939 1156100 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-605794 san=[127.0.0.1 192.168.50.144 custom-flannel-605794 localhost minikube]
	I0731 21:51:33.381321 1156100 provision.go:177] copyRemoteCerts
	I0731 21:51:33.381395 1156100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:51:33.381423 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.384756 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.385257 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.385291 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.385568 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:33.385791 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.385984 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:33.386173 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:51:33.475653 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:51:33.503527 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0731 21:51:33.529573 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:51:33.555160 1156100 provision.go:87] duration metric: took 250.230858ms to configureAuth
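(configureAuth above generates server.pem/server-key.pem in Go with the SANs listed in the log; purely as an illustration, an equivalent certificate could be produced with openssl along these lines. File names and the 365-day validity are placeholders, not minikube's actual values.)
	# Illustration only: issue a server certificate with the same SANs that
	# configureAuth logged above, signed by the existing CA key pair.
	set -euo pipefail
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.custom-flannel-605794" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.144,DNS:custom-flannel-605794,DNS:localhost,DNS:minikube") \
	    -out server.pem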
	I0731 21:51:33.555200 1156100 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:51:33.555411 1156100 config.go:182] Loaded profile config "custom-flannel-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:51:33.555533 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.558407 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.558813 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.558845 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.559089 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:33.559333 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.559517 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.559720 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:33.559940 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:33.560204 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:33.560225 1156100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:51:33.865456 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
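(The SSH command above persists minikube's extra CRI-O flags; as a plain sketch run directly on the guest as root, it is simply:)
	# Sketch of the step above: write the CRI-O options drop-in and restart crio.
	mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" > /etc/sysconfig/crio.minikube
	systemctl restart crio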
	
	I0731 21:51:33.865494 1156100 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:51:33.865507 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetURL
	I0731 21:51:33.866985 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Using libvirt version 6000000
	I0731 21:51:33.869828 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.870287 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.870319 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.870487 1156100 main.go:141] libmachine: Docker is up and running!
	I0731 21:51:33.870504 1156100 main.go:141] libmachine: Reticulating splines...
	I0731 21:51:33.870513 1156100 client.go:171] duration metric: took 22.476796317s to LocalClient.Create
	I0731 21:51:33.870543 1156100 start.go:167] duration metric: took 22.476863091s to libmachine.API.Create "custom-flannel-605794"
	I0731 21:51:33.870579 1156100 start.go:293] postStartSetup for "custom-flannel-605794" (driver="kvm2")
	I0731 21:51:33.870596 1156100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:51:33.870623 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:33.870887 1156100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:51:33.870910 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:33.873653 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.874105 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:33.874134 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:33.874282 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:33.874503 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:33.874678 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:33.874850 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:51:33.959297 1156100 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:51:33.964172 1156100 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:51:33.964203 1156100 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:51:33.964280 1156100 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:51:33.964383 1156100 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:51:33.964540 1156100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:51:33.976273 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:51:34.006960 1156100 start.go:296] duration metric: took 136.357555ms for postStartSetup
	I0731 21:51:34.007021 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetConfigRaw
	I0731 21:51:34.008171 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetIP
	I0731 21:51:34.011462 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.011834 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:34.011875 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.012181 1156100 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/config.json ...
	I0731 21:51:34.012388 1156100 start.go:128] duration metric: took 22.639770917s to createHost
	I0731 21:51:34.012416 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:34.014631 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.014981 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:34.015009 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.015211 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:34.015414 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:34.015593 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:34.015786 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:34.015984 1156100 main.go:141] libmachine: Using SSH client type: native
	I0731 21:51:34.016242 1156100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.144 22 <nil> <nil>}
	I0731 21:51:34.016262 1156100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:51:34.124742 1156100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462694.081232973
	
	I0731 21:51:34.124771 1156100 fix.go:216] guest clock: 1722462694.081232973
	I0731 21:51:34.124779 1156100 fix.go:229] Guest: 2024-07-31 21:51:34.081232973 +0000 UTC Remote: 2024-07-31 21:51:34.012401033 +0000 UTC m=+22.769424537 (delta=68.83194ms)
	I0731 21:51:34.124802 1156100 fix.go:200] guest clock delta is within tolerance: 68.83194ms
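(The clock check above runs date +%s.%N on the guest and compares it with the host's wall clock; a rough standalone version of the same comparison, with host address and key path as placeholders, could look like this.)
	# Rough sketch of the guest-clock check: compare the guest's clock (over SSH)
	# with the local clock and print the delta in seconds. HOST/KEY are placeholders.
	HOST=192.168.50.144
	KEY=$HOME/.minikube/machines/custom-flannel-605794/id_rsa
	guest_ts=$(ssh -i "$KEY" docker@"$HOST" 'date +%s.%N')
	host_ts=$(date +%s.%N)
	awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "clock delta: %.6f s\n", h - g }'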
	I0731 21:51:34.124809 1156100 start.go:83] releasing machines lock for "custom-flannel-605794", held for 22.752309932s
	I0731 21:51:34.124837 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:34.125173 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetIP
	I0731 21:51:34.128371 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.128818 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:34.128884 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.129211 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:34.129925 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:34.130139 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:51:34.130238 1156100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:51:34.130287 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:34.130432 1156100 ssh_runner.go:195] Run: cat /version.json
	I0731 21:51:34.130470 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:51:34.133513 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.133969 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.134118 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:34.134276 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.134320 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:34.134522 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:34.134645 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:34.134680 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:34.134680 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:34.135017 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:51:34.135005 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:51:34.135198 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:51:34.135377 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:51:34.135545 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:51:34.222774 1156100 ssh_runner.go:195] Run: systemctl --version
	I0731 21:51:34.250131 1156100 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:51:34.424338 1156100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:51:34.430495 1156100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:51:34.430588 1156100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:51:34.449866 1156100 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:51:34.449899 1156100 start.go:495] detecting cgroup driver to use...
	I0731 21:51:34.449979 1156100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:51:34.472245 1156100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:51:34.489685 1156100 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:51:34.489765 1156100 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:51:34.508871 1156100 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:51:34.530805 1156100 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:51:34.680774 1156100 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:51:34.874250 1156100 docker.go:233] disabling docker service ...
	I0731 21:51:34.874336 1156100 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:51:34.891659 1156100 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:51:34.907684 1156100 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:51:35.085005 1156100 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:51:35.260355 1156100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:51:35.277203 1156100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:51:35.299906 1156100 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:51:35.299985 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.314023 1156100 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:51:35.314109 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.328399 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.342103 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.354723 1156100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:51:35.370988 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.383370 1156100 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.403569 1156100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:51:35.418380 1156100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:51:35.431191 1156100 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:51:35.431264 1156100 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:51:35.447424 1156100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:51:35.461325 1156100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:51:35.621369 1156100 ssh_runner.go:195] Run: sudo systemctl restart crio
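(Taken together, the commands logged in this block reconfigure CRI-O for the cluster; condensed into one script, run as root on the guest with paths and values exactly as logged, they are roughly:)
	# Condensed sketch of the CRI-O reconfiguration above (root on the guest).
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"   # pause image
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"              # cgroup driver
	sed -i '/conmon_cgroup = .*/d' "$CONF"
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	grep -q '^ *default_sysctls' "$CONF" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	modprobe br_netfilter                      # provides bridge-nf-call-iptables
	echo 1 > /proc/sys/net/ipv4/ip_forward
	systemctl daemon-reload && systemctl restart crio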
	I0731 21:51:35.809338 1156100 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:51:35.809431 1156100 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:51:35.814787 1156100 start.go:563] Will wait 60s for crictl version
	I0731 21:51:35.814860 1156100 ssh_runner.go:195] Run: which crictl
	I0731 21:51:35.819716 1156100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:51:35.869124 1156100 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:51:35.869226 1156100 ssh_runner.go:195] Run: crio --version
	I0731 21:51:35.906279 1156100 ssh_runner.go:195] Run: crio --version
	I0731 21:51:35.942301 1156100 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:51:35.943627 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetIP
	I0731 21:51:35.947411 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:35.947936 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:51:35.947957 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:51:35.948330 1156100 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:51:35.953899 1156100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:51:35.972473 1156100 kubeadm.go:883] updating cluster {Name:custom-flannel-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:51:35.972608 1156100 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:51:35.972673 1156100 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:51:36.013605 1156100 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:51:36.013696 1156100 ssh_runner.go:195] Run: which lz4
	I0731 21:51:36.018002 1156100 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:51:36.022518 1156100 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:51:36.022569 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:51:34.127470 1157553 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 21:51:34.127715 1157553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:51:34.127752 1157553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:51:34.149464 1157553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0731 21:51:34.150188 1157553 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:51:34.150804 1157553 main.go:141] libmachine: Using API Version  1
	I0731 21:51:34.150829 1157553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:51:34.151177 1157553 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:51:34.151394 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetMachineName
	I0731 21:51:34.151612 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:51:34.151775 1157553 start.go:159] libmachine.API.Create for "kindnet-605794" (driver="kvm2")
	I0731 21:51:34.151813 1157553 client.go:168] LocalClient.Create starting
	I0731 21:51:34.151862 1157553 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 21:51:34.151908 1157553 main.go:141] libmachine: Decoding PEM data...
	I0731 21:51:34.151929 1157553 main.go:141] libmachine: Parsing certificate...
	I0731 21:51:34.152019 1157553 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 21:51:34.152052 1157553 main.go:141] libmachine: Decoding PEM data...
	I0731 21:51:34.152068 1157553 main.go:141] libmachine: Parsing certificate...
	I0731 21:51:34.152121 1157553 main.go:141] libmachine: Running pre-create checks...
	I0731 21:51:34.152138 1157553 main.go:141] libmachine: (kindnet-605794) Calling .PreCreateCheck
	I0731 21:51:34.152532 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetConfigRaw
	I0731 21:51:34.153002 1157553 main.go:141] libmachine: Creating machine...
	I0731 21:51:34.153018 1157553 main.go:141] libmachine: (kindnet-605794) Calling .Create
	I0731 21:51:34.153162 1157553 main.go:141] libmachine: (kindnet-605794) Creating KVM machine...
	I0731 21:51:34.154754 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found existing default KVM network
	I0731 21:51:34.157899 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.156259 1157620 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b1:2b:9e} reservation:<nil>}
	I0731 21:51:34.157964 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.157846 1157620 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:dd:02} reservation:<nil>}
	I0731 21:51:34.160375 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.159623 1157620 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030acd0}
	I0731 21:51:34.160409 1157553 main.go:141] libmachine: (kindnet-605794) DBG | created network xml: 
	I0731 21:51:34.160423 1157553 main.go:141] libmachine: (kindnet-605794) DBG | <network>
	I0731 21:51:34.160431 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   <name>mk-kindnet-605794</name>
	I0731 21:51:34.160439 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   <dns enable='no'/>
	I0731 21:51:34.160446 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   
	I0731 21:51:34.160458 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0731 21:51:34.160466 1157553 main.go:141] libmachine: (kindnet-605794) DBG |     <dhcp>
	I0731 21:51:34.160479 1157553 main.go:141] libmachine: (kindnet-605794) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0731 21:51:34.160486 1157553 main.go:141] libmachine: (kindnet-605794) DBG |     </dhcp>
	I0731 21:51:34.160500 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   </ip>
	I0731 21:51:34.160506 1157553 main.go:141] libmachine: (kindnet-605794) DBG |   
	I0731 21:51:34.160512 1157553 main.go:141] libmachine: (kindnet-605794) DBG | </network>
	I0731 21:51:34.160516 1157553 main.go:141] libmachine: (kindnet-605794) DBG | 
	I0731 21:51:34.165984 1157553 main.go:141] libmachine: (kindnet-605794) DBG | trying to create private KVM network mk-kindnet-605794 192.168.61.0/24...
	I0731 21:51:34.259212 1157553 main.go:141] libmachine: (kindnet-605794) DBG | private KVM network mk-kindnet-605794 192.168.61.0/24 created
	I0731 21:51:34.259251 1157553 main.go:141] libmachine: (kindnet-605794) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794 ...
	I0731 21:51:34.259275 1157553 main.go:141] libmachine: (kindnet-605794) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:51:34.259334 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.259244 1157620 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:51:34.259392 1157553 main.go:141] libmachine: (kindnet-605794) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:51:34.553201 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.553069 1157620 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa...
	I0731 21:51:34.680485 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.680335 1157620 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/kindnet-605794.rawdisk...
	I0731 21:51:34.680527 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Writing magic tar header
	I0731 21:51:34.680548 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Writing SSH key tar header
	I0731 21:51:34.680560 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:34.680494 1157620 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794 ...
	I0731 21:51:34.680682 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794
	I0731 21:51:34.680715 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 21:51:34.680728 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794 (perms=drwx------)
	I0731 21:51:34.680742 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:51:34.680753 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 21:51:34.680767 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:51:34.680793 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 21:51:34.680822 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 21:51:34.680838 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:51:34.680865 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:51:34.680874 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Checking permissions on dir: /home
	I0731 21:51:34.680884 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:51:34.680902 1157553 main.go:141] libmachine: (kindnet-605794) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:51:34.680909 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Skipping /home - not owner
	I0731 21:51:34.680916 1157553 main.go:141] libmachine: (kindnet-605794) Creating domain...
	I0731 21:51:34.682256 1157553 main.go:141] libmachine: (kindnet-605794) define libvirt domain using xml: 
	I0731 21:51:34.682278 1157553 main.go:141] libmachine: (kindnet-605794) <domain type='kvm'>
	I0731 21:51:34.682289 1157553 main.go:141] libmachine: (kindnet-605794)   <name>kindnet-605794</name>
	I0731 21:51:34.682297 1157553 main.go:141] libmachine: (kindnet-605794)   <memory unit='MiB'>3072</memory>
	I0731 21:51:34.682307 1157553 main.go:141] libmachine: (kindnet-605794)   <vcpu>2</vcpu>
	I0731 21:51:34.682318 1157553 main.go:141] libmachine: (kindnet-605794)   <features>
	I0731 21:51:34.682326 1157553 main.go:141] libmachine: (kindnet-605794)     <acpi/>
	I0731 21:51:34.682341 1157553 main.go:141] libmachine: (kindnet-605794)     <apic/>
	I0731 21:51:34.682351 1157553 main.go:141] libmachine: (kindnet-605794)     <pae/>
	I0731 21:51:34.682358 1157553 main.go:141] libmachine: (kindnet-605794)     
	I0731 21:51:34.682369 1157553 main.go:141] libmachine: (kindnet-605794)   </features>
	I0731 21:51:34.682394 1157553 main.go:141] libmachine: (kindnet-605794)   <cpu mode='host-passthrough'>
	I0731 21:51:34.682406 1157553 main.go:141] libmachine: (kindnet-605794)   
	I0731 21:51:34.682413 1157553 main.go:141] libmachine: (kindnet-605794)   </cpu>
	I0731 21:51:34.682424 1157553 main.go:141] libmachine: (kindnet-605794)   <os>
	I0731 21:51:34.682432 1157553 main.go:141] libmachine: (kindnet-605794)     <type>hvm</type>
	I0731 21:51:34.682463 1157553 main.go:141] libmachine: (kindnet-605794)     <boot dev='cdrom'/>
	I0731 21:51:34.682477 1157553 main.go:141] libmachine: (kindnet-605794)     <boot dev='hd'/>
	I0731 21:51:34.682498 1157553 main.go:141] libmachine: (kindnet-605794)     <bootmenu enable='no'/>
	I0731 21:51:34.682509 1157553 main.go:141] libmachine: (kindnet-605794)   </os>
	I0731 21:51:34.682518 1157553 main.go:141] libmachine: (kindnet-605794)   <devices>
	I0731 21:51:34.682530 1157553 main.go:141] libmachine: (kindnet-605794)     <disk type='file' device='cdrom'>
	I0731 21:51:34.682545 1157553 main.go:141] libmachine: (kindnet-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/boot2docker.iso'/>
	I0731 21:51:34.682555 1157553 main.go:141] libmachine: (kindnet-605794)       <target dev='hdc' bus='scsi'/>
	I0731 21:51:34.682563 1157553 main.go:141] libmachine: (kindnet-605794)       <readonly/>
	I0731 21:51:34.682572 1157553 main.go:141] libmachine: (kindnet-605794)     </disk>
	I0731 21:51:34.682579 1157553 main.go:141] libmachine: (kindnet-605794)     <disk type='file' device='disk'>
	I0731 21:51:34.682589 1157553 main.go:141] libmachine: (kindnet-605794)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:51:34.682605 1157553 main.go:141] libmachine: (kindnet-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/kindnet-605794.rawdisk'/>
	I0731 21:51:34.682617 1157553 main.go:141] libmachine: (kindnet-605794)       <target dev='hda' bus='virtio'/>
	I0731 21:51:34.682625 1157553 main.go:141] libmachine: (kindnet-605794)     </disk>
	I0731 21:51:34.682637 1157553 main.go:141] libmachine: (kindnet-605794)     <interface type='network'>
	I0731 21:51:34.682649 1157553 main.go:141] libmachine: (kindnet-605794)       <source network='mk-kindnet-605794'/>
	I0731 21:51:34.682659 1157553 main.go:141] libmachine: (kindnet-605794)       <model type='virtio'/>
	I0731 21:51:34.682671 1157553 main.go:141] libmachine: (kindnet-605794)     </interface>
	I0731 21:51:34.682681 1157553 main.go:141] libmachine: (kindnet-605794)     <interface type='network'>
	I0731 21:51:34.682689 1157553 main.go:141] libmachine: (kindnet-605794)       <source network='default'/>
	I0731 21:51:34.682697 1157553 main.go:141] libmachine: (kindnet-605794)       <model type='virtio'/>
	I0731 21:51:34.682705 1157553 main.go:141] libmachine: (kindnet-605794)     </interface>
	I0731 21:51:34.682716 1157553 main.go:141] libmachine: (kindnet-605794)     <serial type='pty'>
	I0731 21:51:34.682725 1157553 main.go:141] libmachine: (kindnet-605794)       <target port='0'/>
	I0731 21:51:34.682736 1157553 main.go:141] libmachine: (kindnet-605794)     </serial>
	I0731 21:51:34.682749 1157553 main.go:141] libmachine: (kindnet-605794)     <console type='pty'>
	I0731 21:51:34.682758 1157553 main.go:141] libmachine: (kindnet-605794)       <target type='serial' port='0'/>
	I0731 21:51:34.682768 1157553 main.go:141] libmachine: (kindnet-605794)     </console>
	I0731 21:51:34.682776 1157553 main.go:141] libmachine: (kindnet-605794)     <rng model='virtio'>
	I0731 21:51:34.682786 1157553 main.go:141] libmachine: (kindnet-605794)       <backend model='random'>/dev/random</backend>
	I0731 21:51:34.682791 1157553 main.go:141] libmachine: (kindnet-605794)     </rng>
	I0731 21:51:34.682797 1157553 main.go:141] libmachine: (kindnet-605794)     
	I0731 21:51:34.682806 1157553 main.go:141] libmachine: (kindnet-605794)     
	I0731 21:51:34.682819 1157553 main.go:141] libmachine: (kindnet-605794)   </devices>
	I0731 21:51:34.682828 1157553 main.go:141] libmachine: (kindnet-605794) </domain>
	I0731 21:51:34.682838 1157553 main.go:141] libmachine: (kindnet-605794) 
	I0731 21:51:34.688194 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:a8:1d:a3 in network default
	I0731 21:51:34.688963 1157553 main.go:141] libmachine: (kindnet-605794) Ensuring networks are active...
	I0731 21:51:34.688985 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:34.689758 1157553 main.go:141] libmachine: (kindnet-605794) Ensuring network default is active
	I0731 21:51:34.690211 1157553 main.go:141] libmachine: (kindnet-605794) Ensuring network mk-kindnet-605794 is active
	I0731 21:51:34.690838 1157553 main.go:141] libmachine: (kindnet-605794) Getting domain xml...
	I0731 21:51:34.691826 1157553 main.go:141] libmachine: (kindnet-605794) Creating domain...
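(libmachine performs the steps above through the libvirt API; as a manual equivalent with virsh, assuming the <network> and <domain> XML printed above were saved to files whose names here are placeholders, the flow is roughly:)
	# Rough virsh equivalent of the libmachine calls above.
	virsh net-define mk-kindnet-605794.xml    # define the private network from the XML above
	virsh net-start  mk-kindnet-605794        # "Ensuring network ... is active"
	virsh define     kindnet-605794.xml       # register the domain from the XML above
	virsh start      kindnet-605794           # "Creating domain..." / boot the VM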
	I0731 21:51:36.239121 1157553 main.go:141] libmachine: (kindnet-605794) Waiting to get IP...
	I0731 21:51:36.239935 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:36.240518 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:36.240551 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:36.240485 1157620 retry.go:31] will retry after 265.497898ms: waiting for machine to come up
	I0731 21:51:36.508216 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:36.508814 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:36.508842 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:36.508720 1157620 retry.go:31] will retry after 250.423462ms: waiting for machine to come up
	I0731 21:51:36.761630 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:36.762331 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:36.762356 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:36.762236 1157620 retry.go:31] will retry after 420.925596ms: waiting for machine to come up
	I0731 21:51:37.185101 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:37.185869 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:37.185901 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:37.185817 1157620 retry.go:31] will retry after 573.385682ms: waiting for machine to come up
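(The "Waiting to get IP" retries above poll the private network's DHCP leases until the new MAC address shows up; a similar poll from the host, with the MAC taken from the log and an illustrative interval, could be sketched as:)
	# Sketch: wait for the kindnet-605794 domain to obtain a DHCP lease.
	MAC=52:54:00:60:d3:e6
	until ip=$(virsh net-dhcp-leases mk-kindnet-605794 | awk -v m="$MAC" '$3 == m { split($5, a, "/"); print a[1] }') \
	      && [ -n "$ip" ]; do
	    sleep 2
	done
	echo "kindnet-605794 is up at $ip"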
	I0731 21:51:34.989764 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:37.357831 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:37.538673 1156100 crio.go:462] duration metric: took 1.520713228s to copy over tarball
	I0731 21:51:37.538764 1156100 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:51:40.361271 1156100 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.822475252s)
	I0731 21:51:40.361368 1156100 crio.go:469] duration metric: took 2.822656535s to extract the tarball
	I0731 21:51:40.361388 1156100 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:51:40.405675 1156100 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:51:40.461962 1156100 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:51:40.462001 1156100 cache_images.go:84] Images are preloaded, skipping loading
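(The preload path above checks the guest's image store and, when the expected images are missing, copies the cached tarball over and unpacks it under /var; a condensed sketch of that flow, with "guest" as a placeholder SSH alias, is:)
	# Condensed sketch of the preload flow logged above.
	TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	if ! ssh guest 'sudo crictl images --output json' | grep -q 'kube-apiserver:v1.30.3'; then
	    scp "$TARBALL" guest:/preloaded.tar.lz4
	    ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
	fi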
	I0731 21:51:40.462013 1156100 kubeadm.go:934] updating node { 192.168.50.144 8443 v1.30.3 crio true true} ...
	I0731 21:51:40.462157 1156100 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
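(The [Unit]/[Service] fragment above is installed as a systemd drop-in for the kubelet; the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below. A manual sketch on the guest, as root and abbreviated to what the log shows, would be:)
	# Sketch: install the kubelet drop-in shown above and reload systemd.
	mkdir -p /etc/systemd/system/kubelet.service.d
	cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.144

	[Install]
	EOF
	systemctl daemon-reload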
	I0731 21:51:40.462249 1156100 ssh_runner.go:195] Run: crio config
	I0731 21:51:40.524674 1156100 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 21:51:40.524724 1156100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:51:40.524758 1156100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.144 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-605794 NodeName:custom-flannel-605794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:51:40.524976 1156100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-605794"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:51:40.525057 1156100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:51:40.538362 1156100 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:51:40.538454 1156100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:51:40.551297 1156100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0731 21:51:40.569758 1156100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:51:40.587851 1156100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0731 21:51:40.605133 1156100 ssh_runner.go:195] Run: grep 192.168.50.144	control-plane.minikube.internal$ /etc/hosts
	I0731 21:51:40.609090 1156100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:51:40.621989 1156100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:51:40.737967 1156100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:51:40.755297 1156100 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794 for IP: 192.168.50.144
	I0731 21:51:40.755327 1156100 certs.go:194] generating shared ca certs ...
	I0731 21:51:40.755352 1156100 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:40.755581 1156100 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:51:40.755639 1156100 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:51:40.755655 1156100 certs.go:256] generating profile certs ...
	I0731 21:51:40.755730 1156100 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.key
	I0731 21:51:40.755749 1156100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.crt with IP's: []
	I0731 21:51:40.840695 1156100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.crt ...
	I0731 21:51:40.840728 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.crt: {Name:mk4b736d8e6e11cccb30523f0e86727a6b9d86fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:40.840909 1156100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.key ...
	I0731 21:51:40.840928 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/client.key: {Name:mk74bd0dd5f391adddf46713c9da70cd68fb1b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:40.841095 1156100 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key.27a24ebb
	I0731 21:51:40.841116 1156100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt.27a24ebb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.144]
	I0731 21:51:41.103842 1156100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt.27a24ebb ...
	I0731 21:51:41.103878 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt.27a24ebb: {Name:mk61a3d82add7ced714924674b7906f3d5a1b79c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:41.104080 1156100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key.27a24ebb ...
	I0731 21:51:41.104124 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key.27a24ebb: {Name:mk837b9ba0d8810788e5c44ad610f9ef337f6a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:41.104224 1156100 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt.27a24ebb -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt
	I0731 21:51:41.104333 1156100 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key.27a24ebb -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key
	I0731 21:51:41.104431 1156100 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.key
	I0731 21:51:41.104482 1156100 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.crt with IP's: []
	I0731 21:51:41.334717 1156100 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.crt ...
	I0731 21:51:41.334753 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.crt: {Name:mk96535f6f0615eb218f862c71c1127f8f8b8ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:41.334929 1156100 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.key ...
	I0731 21:51:41.334948 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.key: {Name:mk24461e7ad82c9a00f4d8f700d3ec5379f92041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:51:41.335121 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:51:41.335161 1156100 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:51:41.335170 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:51:41.335193 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:51:41.335219 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:51:41.335239 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:51:41.335275 1156100 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:51:41.335855 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:51:41.364499 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:51:41.391394 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:51:41.420476 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:51:41.467976 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:51:41.493487 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:51:41.524496 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:51:41.632188 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/custom-flannel-605794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:51:41.667761 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:51:41.701608 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:51:41.735917 1156100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:51:41.770201 1156100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:51:41.794320 1156100 ssh_runner.go:195] Run: openssl version
	I0731 21:51:41.801953 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:51:41.813987 1156100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:51:41.819551 1156100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:51:41.819621 1156100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:51:41.827351 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:51:41.842020 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:51:41.854927 1156100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:51:41.859464 1156100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:51:41.859558 1156100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:51:41.865601 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:51:41.877378 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:51:41.888929 1156100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:51:41.893565 1156100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:51:41.893641 1156100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:51:41.899627 1156100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:51:41.910761 1156100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:51:41.916019 1156100 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:51:41.916080 1156100 kubeadm.go:392] StartCluster: {Name:custom-flannel-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:51:41.916236 1156100 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:51:41.916300 1156100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:51:41.963346 1156100 cri.go:89] found id: ""
	I0731 21:51:41.963435 1156100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:51:41.974031 1156100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:51:41.984637 1156100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:51:41.994999 1156100 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:51:41.995027 1156100 kubeadm.go:157] found existing configuration files:
	
	I0731 21:51:41.995090 1156100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:51:42.004759 1156100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:51:42.004837 1156100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:51:42.014779 1156100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:51:42.025802 1156100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:51:42.025906 1156100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:51:42.040498 1156100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:51:42.049800 1156100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:51:42.049876 1156100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:51:42.061474 1156100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:51:42.072244 1156100 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:51:42.072320 1156100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:51:42.082826 1156100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:51:42.143327 1156100 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:51:42.143399 1156100 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:51:42.303748 1156100 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:51:42.303910 1156100 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:51:42.304021 1156100 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:51:42.515565 1156100 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:51:37.761137 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:37.761874 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:37.761898 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:37.761822 1157620 retry.go:31] will retry after 572.525678ms: waiting for machine to come up
	I0731 21:51:38.335757 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:38.336337 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:38.336367 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:38.336306 1157620 retry.go:31] will retry after 619.489905ms: waiting for machine to come up
	I0731 21:51:38.957122 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:38.957637 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:38.957668 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:38.957582 1157620 retry.go:31] will retry after 845.803734ms: waiting for machine to come up
	I0731 21:51:39.804710 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:39.805286 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:39.805317 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:39.805219 1157620 retry.go:31] will retry after 923.082747ms: waiting for machine to come up
	I0731 21:51:40.730521 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:40.731088 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:40.731112 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:40.731059 1157620 retry.go:31] will retry after 1.407209176s: waiting for machine to come up
	I0731 21:51:42.139661 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:42.140236 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:42.140270 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:42.140183 1157620 retry.go:31] will retry after 2.311591337s: waiting for machine to come up
	I0731 21:51:39.493123 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:42.558175 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:42.607624 1156100 out.go:204]   - Generating certificates and keys ...
	I0731 21:51:42.607778 1156100 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:51:42.607879 1156100 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:51:42.768177 1156100 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:51:42.928612 1156100 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:51:43.173358 1156100 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:51:43.345050 1156100 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:51:43.622161 1156100 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:51:43.622369 1156100 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-605794 localhost] and IPs [192.168.50.144 127.0.0.1 ::1]
	I0731 21:51:43.717715 1156100 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:51:43.717878 1156100 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-605794 localhost] and IPs [192.168.50.144 127.0.0.1 ::1]
	I0731 21:51:44.121450 1156100 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:51:44.254366 1156100 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:51:44.548365 1156100 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:51:44.548659 1156100 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:51:44.681181 1156100 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:51:44.867342 1156100 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:51:45.309818 1156100 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:51:45.408085 1156100 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:51:45.496269 1156100 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:51:45.496799 1156100 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:51:45.501998 1156100 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:51:45.504167 1156100 out.go:204]   - Booting up control plane ...
	I0731 21:51:45.504310 1156100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:51:45.504418 1156100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:51:45.504523 1156100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:51:45.521060 1156100 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:51:45.522187 1156100 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:51:45.522262 1156100 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:51:45.665743 1156100 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:51:45.665900 1156100 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:51:46.167473 1156100 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.613138ms
	I0731 21:51:46.167560 1156100 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:51:44.453039 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:44.453661 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:44.453690 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:44.453612 1157620 retry.go:31] will retry after 2.812484085s: waiting for machine to come up
	I0731 21:51:47.269649 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:47.270178 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:47.270209 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:47.270121 1157620 retry.go:31] will retry after 3.137169093s: waiting for machine to come up
	I0731 21:51:44.989725 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:47.489115 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:51.672553 1156100 kubeadm.go:310] [api-check] The API server is healthy after 5.503326242s
	I0731 21:51:51.686706 1156100 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:51:51.701914 1156100 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:51:51.730002 1156100 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:51:51.730271 1156100 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-605794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:51:51.743332 1156100 kubeadm.go:310] [bootstrap-token] Using token: a1h7u7.9usdktwohlablsit
	I0731 21:51:50.409202 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:50.409735 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:50.409768 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:50.409693 1157620 retry.go:31] will retry after 3.486496225s: waiting for machine to come up
	I0731 21:51:51.744709 1156100 out.go:204]   - Configuring RBAC rules ...
	I0731 21:51:51.744935 1156100 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:51:51.750534 1156100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:51:51.761651 1156100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:51:51.765888 1156100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:51:51.769573 1156100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:51:51.773285 1156100 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:51:52.080867 1156100 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:51:52.529226 1156100 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:51:53.079897 1156100 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:51:53.079965 1156100 kubeadm.go:310] 
	I0731 21:51:53.080060 1156100 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:51:53.080071 1156100 kubeadm.go:310] 
	I0731 21:51:53.080242 1156100 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:51:53.080256 1156100 kubeadm.go:310] 
	I0731 21:51:53.080314 1156100 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:51:53.080408 1156100 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:51:53.080477 1156100 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:51:53.080487 1156100 kubeadm.go:310] 
	I0731 21:51:53.080563 1156100 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:51:53.080574 1156100 kubeadm.go:310] 
	I0731 21:51:53.080641 1156100 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:51:53.080650 1156100 kubeadm.go:310] 
	I0731 21:51:53.080730 1156100 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:51:53.080840 1156100 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:51:53.080968 1156100 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:51:53.080986 1156100 kubeadm.go:310] 
	I0731 21:51:53.081102 1156100 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:51:53.081230 1156100 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:51:53.081242 1156100 kubeadm.go:310] 
	I0731 21:51:53.081355 1156100 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a1h7u7.9usdktwohlablsit \
	I0731 21:51:53.081479 1156100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:51:53.081513 1156100 kubeadm.go:310] 	--control-plane 
	I0731 21:51:53.081519 1156100 kubeadm.go:310] 
	I0731 21:51:53.081622 1156100 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:51:53.081640 1156100 kubeadm.go:310] 
	I0731 21:51:53.081766 1156100 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a1h7u7.9usdktwohlablsit \
	I0731 21:51:53.081915 1156100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:51:53.082073 1156100 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:51:53.082100 1156100 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 21:51:53.084553 1156100 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0731 21:51:49.489934 1155232 pod_ready.go:102] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"False"
	I0731 21:51:51.488271 1155232 pod_ready.go:92] pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:51.488309 1155232 pod_ready.go:81] duration metric: took 20.506952911s for pod "calico-kube-controllers-564985c589-jtkqc" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:51.488324 1155232 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-vslzw" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.496123 1155232 pod_ready.go:92] pod "calico-node-vslzw" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.496157 1155232 pod_ready.go:81] duration metric: took 2.007823958s for pod "calico-node-vslzw" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.496173 1155232 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-b7ck7" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.503262 1155232 pod_ready.go:92] pod "coredns-7db6d8ff4d-b7ck7" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.503291 1155232 pod_ready.go:81] duration metric: took 7.100064ms for pod "coredns-7db6d8ff4d-b7ck7" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.503303 1155232 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.508538 1155232 pod_ready.go:92] pod "etcd-calico-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.508571 1155232 pod_ready.go:81] duration metric: took 5.260066ms for pod "etcd-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.508583 1155232 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.513985 1155232 pod_ready.go:92] pod "kube-apiserver-calico-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.514009 1155232 pod_ready.go:81] duration metric: took 5.418245ms for pod "kube-apiserver-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.514019 1155232 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.520037 1155232 pod_ready.go:92] pod "kube-controller-manager-calico-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.520069 1155232 pod_ready.go:81] duration metric: took 6.042142ms for pod "kube-controller-manager-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.520083 1155232 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-zzhg9" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.892225 1155232 pod_ready.go:92] pod "kube-proxy-zzhg9" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:53.892263 1155232 pod_ready.go:81] duration metric: took 372.14916ms for pod "kube-proxy-zzhg9" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:53.892277 1155232 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:54.292082 1155232 pod_ready.go:92] pod "kube-scheduler-calico-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:51:54.292121 1155232 pod_ready.go:81] duration metric: took 399.835794ms for pod "kube-scheduler-calico-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:51:54.292136 1155232 pod_ready.go:38] duration metric: took 23.318686471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:51:54.292157 1155232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:51:54.292230 1155232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:51:54.312394 1155232 api_server.go:72] duration metric: took 32.186375632s to wait for apiserver process to appear ...
	I0731 21:51:54.312432 1155232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:51:54.312458 1155232 api_server.go:253] Checking apiserver healthz at https://192.168.72.131:8443/healthz ...
	I0731 21:51:54.317127 1155232 api_server.go:279] https://192.168.72.131:8443/healthz returned 200:
	ok
	I0731 21:51:54.318239 1155232 api_server.go:141] control plane version: v1.30.3
	I0731 21:51:54.318267 1155232 api_server.go:131] duration metric: took 5.826707ms to wait for apiserver health ...
	I0731 21:51:54.318278 1155232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:51:54.496111 1155232 system_pods.go:59] 9 kube-system pods found
	I0731 21:51:54.496159 1155232 system_pods.go:61] "calico-kube-controllers-564985c589-jtkqc" [9750bfe8-3082-470a-a3a4-3254b6a5591b] Running
	I0731 21:51:54.496166 1155232 system_pods.go:61] "calico-node-vslzw" [eee7a146-0793-4343-9815-9094f9562ba1] Running
	I0731 21:51:54.496178 1155232 system_pods.go:61] "coredns-7db6d8ff4d-b7ck7" [15a4ed72-027d-4fd4-b89d-5184d36c39d7] Running
	I0731 21:51:54.496181 1155232 system_pods.go:61] "etcd-calico-605794" [e159a41b-3eb1-4d3a-abf3-172f6028217f] Running
	I0731 21:51:54.496184 1155232 system_pods.go:61] "kube-apiserver-calico-605794" [ef26f8e7-0ebf-49ab-9471-b0d2a751f816] Running
	I0731 21:51:54.496187 1155232 system_pods.go:61] "kube-controller-manager-calico-605794" [e2646cd4-ec7b-49ce-be94-300b73fb75d0] Running
	I0731 21:51:54.496189 1155232 system_pods.go:61] "kube-proxy-zzhg9" [df40d0dc-6af8-4dda-b65f-8d888aac7204] Running
	I0731 21:51:54.496192 1155232 system_pods.go:61] "kube-scheduler-calico-605794" [67c439d9-150a-4677-b4b2-d8112c27fa66] Running
	I0731 21:51:54.496196 1155232 system_pods.go:61] "storage-provisioner" [3a3f0e5c-0022-4f94-b9c7-d82b9379074a] Running
	I0731 21:51:54.496207 1155232 system_pods.go:74] duration metric: took 177.923337ms to wait for pod list to return data ...
	I0731 21:51:54.496216 1155232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:51:54.692703 1155232 default_sa.go:45] found service account: "default"
	I0731 21:51:54.692732 1155232 default_sa.go:55] duration metric: took 196.509412ms for default service account to be created ...
	I0731 21:51:54.692742 1155232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:51:54.895995 1155232 system_pods.go:86] 9 kube-system pods found
	I0731 21:51:54.896028 1155232 system_pods.go:89] "calico-kube-controllers-564985c589-jtkqc" [9750bfe8-3082-470a-a3a4-3254b6a5591b] Running
	I0731 21:51:54.896033 1155232 system_pods.go:89] "calico-node-vslzw" [eee7a146-0793-4343-9815-9094f9562ba1] Running
	I0731 21:51:54.896038 1155232 system_pods.go:89] "coredns-7db6d8ff4d-b7ck7" [15a4ed72-027d-4fd4-b89d-5184d36c39d7] Running
	I0731 21:51:54.896041 1155232 system_pods.go:89] "etcd-calico-605794" [e159a41b-3eb1-4d3a-abf3-172f6028217f] Running
	I0731 21:51:54.896045 1155232 system_pods.go:89] "kube-apiserver-calico-605794" [ef26f8e7-0ebf-49ab-9471-b0d2a751f816] Running
	I0731 21:51:54.896048 1155232 system_pods.go:89] "kube-controller-manager-calico-605794" [e2646cd4-ec7b-49ce-be94-300b73fb75d0] Running
	I0731 21:51:54.896052 1155232 system_pods.go:89] "kube-proxy-zzhg9" [df40d0dc-6af8-4dda-b65f-8d888aac7204] Running
	I0731 21:51:54.896056 1155232 system_pods.go:89] "kube-scheduler-calico-605794" [67c439d9-150a-4677-b4b2-d8112c27fa66] Running
	I0731 21:51:54.896059 1155232 system_pods.go:89] "storage-provisioner" [3a3f0e5c-0022-4f94-b9c7-d82b9379074a] Running
	I0731 21:51:54.896064 1155232 system_pods.go:126] duration metric: took 203.317418ms to wait for k8s-apps to be running ...
	I0731 21:51:54.896071 1155232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:51:54.896139 1155232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:51:54.911857 1155232 system_svc.go:56] duration metric: took 15.768916ms WaitForService to wait for kubelet
	I0731 21:51:54.911898 1155232 kubeadm.go:582] duration metric: took 32.785896409s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:51:54.911925 1155232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:51:55.092499 1155232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:51:55.092536 1155232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:51:55.092550 1155232 node_conditions.go:105] duration metric: took 180.619167ms to run NodePressure ...
	I0731 21:51:55.092564 1155232 start.go:241] waiting for startup goroutines ...
	I0731 21:51:55.092581 1155232 start.go:246] waiting for cluster config update ...
	I0731 21:51:55.092622 1155232 start.go:255] writing updated cluster config ...
	I0731 21:51:55.092955 1155232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:51:55.144536 1155232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:51:55.147043 1155232 out.go:177] * Done! kubectl is now configured to use "calico-605794" cluster and "default" namespace by default
	I0731 21:51:53.085890 1156100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 21:51:53.085945 1156100 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0731 21:51:53.091313 1156100 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0731 21:51:53.091352 1156100 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0731 21:51:53.121240 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 21:51:53.581099 1156100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:51:53.581204 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:53.581208 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-605794 minikube.k8s.io/updated_at=2024_07_31T21_51_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=custom-flannel-605794 minikube.k8s.io/primary=true
	I0731 21:51:53.730125 1156100 ops.go:34] apiserver oom_adj: -16
	I0731 21:51:53.730144 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:54.230918 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:54.730990 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:55.230566 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:55.730292 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:56.230539 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:53.900070 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:53.900573 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find current IP address of domain kindnet-605794 in network mk-kindnet-605794
	I0731 21:51:53.900597 1157553 main.go:141] libmachine: (kindnet-605794) DBG | I0731 21:51:53.900530 1157620 retry.go:31] will retry after 3.901584162s: waiting for machine to come up
	I0731 21:51:56.730552 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:57.230853 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:57.731094 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:58.230218 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:58.731182 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:59.231152 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:59.730593 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:00.230193 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:00.730638 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:01.230881 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:51:57.805299 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:57.805907 1157553 main.go:141] libmachine: (kindnet-605794) Found IP for machine: 192.168.61.151
	I0731 21:51:57.805942 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has current primary IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:57.805952 1157553 main.go:141] libmachine: (kindnet-605794) Reserving static IP address...
	I0731 21:51:57.806337 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find host DHCP lease matching {name: "kindnet-605794", mac: "52:54:00:60:d3:e6", ip: "192.168.61.151"} in network mk-kindnet-605794
	I0731 21:51:57.893961 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Getting to WaitForSSH function...
	I0731 21:51:57.893996 1157553 main.go:141] libmachine: (kindnet-605794) Reserved static IP address: 192.168.61.151
	I0731 21:51:57.894010 1157553 main.go:141] libmachine: (kindnet-605794) Waiting for SSH to be available...
	I0731 21:51:57.897258 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:51:57.897611 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794
	I0731 21:51:57.897641 1157553 main.go:141] libmachine: (kindnet-605794) DBG | unable to find defined IP address of network mk-kindnet-605794 interface with MAC address 52:54:00:60:d3:e6
	I0731 21:51:57.897848 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Using SSH client type: external
	I0731 21:51:57.897875 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa (-rw-------)
	I0731 21:51:57.897901 1157553 main.go:141] libmachine: (kindnet-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:51:57.897917 1157553 main.go:141] libmachine: (kindnet-605794) DBG | About to run SSH command:
	I0731 21:51:57.897929 1157553 main.go:141] libmachine: (kindnet-605794) DBG | exit 0
	I0731 21:51:57.902116 1157553 main.go:141] libmachine: (kindnet-605794) DBG | SSH cmd err, output: exit status 255: 
	I0731 21:51:57.902142 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 21:51:57.902149 1157553 main.go:141] libmachine: (kindnet-605794) DBG | command : exit 0
	I0731 21:51:57.902154 1157553 main.go:141] libmachine: (kindnet-605794) DBG | err     : exit status 255
	I0731 21:51:57.902162 1157553 main.go:141] libmachine: (kindnet-605794) DBG | output  : 
	I0731 21:52:00.903350 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Getting to WaitForSSH function...
	I0731 21:52:00.906089 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:00.906662 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:00.906688 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:00.906804 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Using SSH client type: external
	I0731 21:52:00.906843 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa (-rw-------)
	I0731 21:52:00.906873 1157553 main.go:141] libmachine: (kindnet-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:52:00.906891 1157553 main.go:141] libmachine: (kindnet-605794) DBG | About to run SSH command:
	I0731 21:52:00.906908 1157553 main.go:141] libmachine: (kindnet-605794) DBG | exit 0
	I0731 21:52:01.035998 1157553 main.go:141] libmachine: (kindnet-605794) DBG | SSH cmd err, output: <nil>: 
	I0731 21:52:01.036430 1157553 main.go:141] libmachine: (kindnet-605794) KVM machine creation complete!
	I0731 21:52:01.036738 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetConfigRaw
	I0731 21:52:01.037312 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:01.037547 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:01.037727 1157553 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:52:01.037752 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetState
	I0731 21:52:01.039195 1157553 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:52:01.039209 1157553 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:52:01.039215 1157553 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:52:01.039220 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.041789 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.042228 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.042255 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.042502 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.042746 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.042990 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.043128 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.043292 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:01.043484 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:01.043494 1157553 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:52:01.151753 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:52:01.151787 1157553 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:52:01.151802 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.155075 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.155476 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.155508 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.155915 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.156221 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.156449 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.156689 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.156935 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:01.157193 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:01.157213 1157553 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:52:01.269103 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:52:01.269165 1157553 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:52:01.269175 1157553 main.go:141] libmachine: Provisioning with buildroot...
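Provisioner detection above amounts to running cat /etc/os-release on the guest and matching the ID field; the minikube ISO reports buildroot. A rough sketch of that matching (not the libmachine code itself, which has detectors for several distributions):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns key=value lines of /etc/os-release (as captured in
	// the log above) into a map, trimming optional quotes around values.
	func parseOSRelease(raw string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(raw))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			out[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return out
	}

	func main() {
		raw := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		if parseOSRelease(raw)["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot") // mirrors the log line above
		}
	}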
	I0731 21:52:01.269187 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetMachineName
	I0731 21:52:01.269456 1157553 buildroot.go:166] provisioning hostname "kindnet-605794"
	I0731 21:52:01.269476 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetMachineName
	I0731 21:52:01.269677 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.273232 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.273703 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.273732 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.273947 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.274175 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.274373 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.274570 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.274731 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:01.274918 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:01.274935 1157553 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-605794 && echo "kindnet-605794" | sudo tee /etc/hostname
	I0731 21:52:01.409074 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-605794
	
	I0731 21:52:01.409124 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.412582 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.413028 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.413054 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.413456 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.413686 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.413909 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.414109 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.414326 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:01.414578 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:01.414605 1157553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-605794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-605794/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-605794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:52:01.538657 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:52:01.538719 1157553 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:52:01.538754 1157553 buildroot.go:174] setting up certificates
	I0731 21:52:01.538768 1157553 provision.go:84] configureAuth start
	I0731 21:52:01.538785 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetMachineName
	I0731 21:52:01.539135 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetIP
	I0731 21:52:01.542738 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.543194 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.543226 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.543416 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.546298 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.546744 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.546776 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.547006 1157553 provision.go:143] copyHostCerts
	I0731 21:52:01.547084 1157553 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:52:01.547099 1157553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:52:01.547175 1157553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:52:01.547327 1157553 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:52:01.547344 1157553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:52:01.547381 1157553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:52:01.547491 1157553 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:52:01.547502 1157553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:52:01.547522 1157553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:52:01.547578 1157553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.kindnet-605794 san=[127.0.0.1 192.168.61.151 kindnet-605794 localhost minikube]
	I0731 21:52:01.741931 1157553 provision.go:177] copyRemoteCerts
	I0731 21:52:01.742025 1157553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:52:01.742075 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.745640 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.746149 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.746186 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.746486 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.746709 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.746906 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.747111 1157553 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa Username:docker}
	I0731 21:52:01.834573 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:52:01.859325 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0731 21:52:01.883775 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:52:01.914427 1157553 provision.go:87] duration metric: took 375.641311ms to configureAuth
	I0731 21:52:01.914462 1157553 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:52:01.914681 1157553 config.go:182] Loaded profile config "kindnet-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:52:01.914767 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:01.917990 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.918389 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:01.918418 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:01.918635 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:01.918880 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.919071 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:01.919242 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:01.919485 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:01.919726 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:01.919744 1157553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:52:02.205595 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:52:02.205636 1157553 main.go:141] libmachine: Checking connection to Docker...
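A note on the %!s(MISSING) that appears in the printf command above (and again later in date +%!s(MISSING).%!N(MISSING), the find -printf "%!p(MISSING), " line, and the kubelet evictionHard "0%!" entries): it is Go's fmt notation for a formatting verb with no matching argument, produced when an already-rendered command string that contains a literal %s is passed back through a printf-style logger. The command actually run on the guest presumably still carried the plain %s for the guest-side printf. A small illustration of how fmt produces that marker:

	package main

	import "fmt"

	func main() {
		// A command string containing a literal %s meant for the guest-side
		// printf (content abbreviated from the log above).
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
		// Passing that string as a *format* gives fmt a %s verb with no
		// argument to consume, so it renders %!s(MISSING), exactly as logged.
		// (go vet would warn about this call; it still compiles and runs.)
		fmt.Println(fmt.Sprintf(cmd))
	}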
	I0731 21:52:02.205648 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetURL
	I0731 21:52:02.207067 1157553 main.go:141] libmachine: (kindnet-605794) DBG | Using libvirt version 6000000
	I0731 21:52:02.209460 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.209873 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.209907 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.210063 1157553 main.go:141] libmachine: Docker is up and running!
	I0731 21:52:02.210082 1157553 main.go:141] libmachine: Reticulating splines...
	I0731 21:52:02.210091 1157553 client.go:171] duration metric: took 28.058267074s to LocalClient.Create
	I0731 21:52:02.210134 1157553 start.go:167] duration metric: took 28.058351384s to libmachine.API.Create "kindnet-605794"
	I0731 21:52:02.210153 1157553 start.go:293] postStartSetup for "kindnet-605794" (driver="kvm2")
	I0731 21:52:02.210167 1157553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:52:02.210194 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:02.210491 1157553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:52:02.210526 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:02.213378 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.213813 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.213846 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.214006 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:02.214236 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:02.214456 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:02.214662 1157553 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa Username:docker}
	I0731 21:52:02.304588 1157553 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:52:02.309279 1157553 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:52:02.309315 1157553 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:52:02.309387 1157553 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:52:02.309465 1157553 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:52:02.309569 1157553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:52:02.321748 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:52:02.348270 1157553 start.go:296] duration metric: took 138.092825ms for postStartSetup
	I0731 21:52:02.348435 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetConfigRaw
	I0731 21:52:02.349207 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetIP
	I0731 21:52:02.352113 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.352516 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.352547 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.352769 1157553 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/config.json ...
	I0731 21:52:02.353003 1157553 start.go:128] duration metric: took 28.227870958s to createHost
	I0731 21:52:02.353036 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:02.355459 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.355779 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.355817 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.355935 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:02.356176 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:02.356357 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:02.356548 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:02.356757 1157553 main.go:141] libmachine: Using SSH client type: native
	I0731 21:52:02.356937 1157553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0731 21:52:02.356948 1157553 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:52:02.466225 1157553 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462722.444061811
	
	I0731 21:52:02.466290 1157553 fix.go:216] guest clock: 1722462722.444061811
	I0731 21:52:02.466304 1157553 fix.go:229] Guest: 2024-07-31 21:52:02.444061811 +0000 UTC Remote: 2024-07-31 21:52:02.353019654 +0000 UTC m=+34.862312734 (delta=91.042157ms)
	I0731 21:52:02.466337 1157553 fix.go:200] guest clock delta is within tolerance: 91.042157ms
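The clock check above parses the guest's date +%s.%N output and compares it with a host-side timestamp taken at the same moment; small deltas are accepted, larger ones would trigger a resync. The arithmetic can be reproduced from the two timestamps in the log (the 2-second tolerance below is an assumption for illustration, not a value taken from minikube):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch converts `date +%s.%N` output from the guest into a time.Time.
	func parseEpoch(raw string) time.Time {
		parts := strings.SplitN(raw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		return time.Unix(sec, nsec)
	}

	func main() {
		guest := parseEpoch("1722462722.444061811") // guest clock, from the log
		host := time.Unix(1722462722, 353019654)    // host-side "Remote" timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 91.042157ms
		} else {
			fmt.Printf("guest clock skewed by %v, would resync\n", delta)
		}
	}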
	I0731 21:52:02.466345 1157553 start.go:83] releasing machines lock for "kindnet-605794", held for 28.341417824s
	I0731 21:52:02.466382 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:02.466742 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetIP
	I0731 21:52:02.470163 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.470556 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.470592 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.470852 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:02.471415 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:02.471646 1157553 main.go:141] libmachine: (kindnet-605794) Calling .DriverName
	I0731 21:52:02.471755 1157553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:52:02.471804 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:02.472155 1157553 ssh_runner.go:195] Run: cat /version.json
	I0731 21:52:02.472184 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHHostname
	I0731 21:52:02.475012 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.475169 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.475451 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.475497 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.475754 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:02.475781 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:02.475895 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:02.476139 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHPort
	I0731 21:52:02.476155 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:02.476372 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:02.476407 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHKeyPath
	I0731 21:52:02.476589 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetSSHUsername
	I0731 21:52:02.476617 1157553 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa Username:docker}
	I0731 21:52:02.476754 1157553 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/kindnet-605794/id_rsa Username:docker}
	I0731 21:52:02.561703 1157553 ssh_runner.go:195] Run: systemctl --version
	I0731 21:52:02.585349 1157553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:52:02.751106 1157553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:52:02.758438 1157553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:52:02.758532 1157553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:52:02.780884 1157553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:52:02.780916 1157553 start.go:495] detecting cgroup driver to use...
	I0731 21:52:02.780993 1157553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:52:02.800312 1157553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:52:02.816420 1157553 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:52:02.816504 1157553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:52:02.837646 1157553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:52:02.855371 1157553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:52:02.988003 1157553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:52:03.151781 1157553 docker.go:233] disabling docker service ...
	I0731 21:52:03.151856 1157553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:52:03.168654 1157553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:52:03.182732 1157553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:52:03.335089 1157553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:52:03.486226 1157553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:52:03.501588 1157553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:52:03.520147 1157553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:52:03.520222 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.531268 1157553 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:52:03.531357 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.542078 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.552995 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.565375 1157553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:52:03.576031 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.587037 1157553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.610419 1157553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:52:03.622119 1157553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:52:03.632218 1157553 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:52:03.632298 1157553 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:52:03.646299 1157553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
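The netfilter step above is a probe with a fallback: reading the sysctl fails with status 255 because the br_netfilter module is not loaded yet and /proc/sys/net/bridge/ does not exist, so the next actions are modprobe br_netfilter and enabling IPv4 forwarding. Sketched in Go, shelling out the same way the ssh_runner lines do (a simplification, not minikube's crio.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe: only succeeds once br_netfilter is loaded and the
		// /proc/sys/net/bridge/ hierarchy exists.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter failed:", err)
			}
		}
		// Independently of the probe, make sure the guest forwards IPv4 traffic.
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			fmt.Println("enabling ip_forward failed:", err)
		}
	}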
	I0731 21:52:03.657097 1157553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:52:03.831836 1157553 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:52:03.971587 1157553 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:52:03.971674 1157553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:52:03.976935 1157553 start.go:563] Will wait 60s for crictl version
	I0731 21:52:03.977022 1157553 ssh_runner.go:195] Run: which crictl
	I0731 21:52:03.980865 1157553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:52:04.019688 1157553 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:52:04.019816 1157553 ssh_runner.go:195] Run: crio --version
	I0731 21:52:04.050273 1157553 ssh_runner.go:195] Run: crio --version
	I0731 21:52:04.090399 1157553 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:52:01.731228 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:02.230207 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:02.730391 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:03.231091 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:03.730769 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:04.230921 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:04.730365 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:05.231150 1156100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:52:05.359354 1156100 kubeadm.go:1113] duration metric: took 11.778238287s to wait for elevateKubeSystemPrivileges
	I0731 21:52:05.359398 1156100 kubeadm.go:394] duration metric: took 23.44332373s to StartCluster
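The burst of "kubectl get sa default" runs above comes from the parallel custom-flannel-605794 job (pid 1156100) and is a readiness poll: after kubeadm finishes, the caller keeps asking for the default ServiceAccount roughly twice a second until the controller-manager has created it, which is what "wait for elevateKubeSystemPrivileges" measures. A hedged sketch of that loop, shelling out to kubectl as the log does (paths copied from the log; interval and timeout are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount polls `kubectl get sa default` until it
	// succeeds, mirroring the repeated Run lines in the log above.
	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the controller-manager has created the default ServiceAccount
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default ServiceAccount not created within %v", timeout)
			}
			time.Sleep(500 * time.Millisecond) // ~2 polls/second, as in the log timestamps
		}
	}

	func main() {
		err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute)
		fmt.Println(err)
	}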
	I0731 21:52:05.359441 1156100 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:05.359526 1156100 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:52:05.361389 1156100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:05.361666 1156100 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:52:05.361790 1156100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 21:52:05.361886 1156100 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:52:05.361940 1156100 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-605794"
	I0731 21:52:05.361965 1156100 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-605794"
	I0731 21:52:05.361994 1156100 host.go:66] Checking if "custom-flannel-605794" exists ...
	I0731 21:52:05.362050 1156100 config.go:182] Loaded profile config "custom-flannel-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:52:05.362111 1156100 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-605794"
	I0731 21:52:05.362138 1156100 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-605794"
	I0731 21:52:05.362345 1156100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:52:05.362364 1156100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:52:05.362513 1156100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:52:05.362538 1156100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:52:05.363638 1156100 out.go:177] * Verifying Kubernetes components...
	I0731 21:52:05.365208 1156100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:52:05.386549 1156100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0731 21:52:05.389093 1156100 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:52:05.390780 1156100 main.go:141] libmachine: Using API Version  1
	I0731 21:52:05.390816 1156100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:52:05.391353 1156100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:52:05.392079 1156100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:52:05.392172 1156100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:52:05.395310 1156100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0731 21:52:05.395882 1156100 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:52:05.396459 1156100 main.go:141] libmachine: Using API Version  1
	I0731 21:52:05.396483 1156100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:52:05.396946 1156100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:52:05.397093 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetState
	I0731 21:52:05.400981 1156100 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-605794"
	I0731 21:52:05.401034 1156100 host.go:66] Checking if "custom-flannel-605794" exists ...
	I0731 21:52:05.401417 1156100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:52:05.401464 1156100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:52:05.413541 1156100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
	I0731 21:52:05.414189 1156100 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:52:05.414820 1156100 main.go:141] libmachine: Using API Version  1
	I0731 21:52:05.414838 1156100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:52:05.415226 1156100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:52:05.415410 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetState
	I0731 21:52:05.417545 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:52:05.419772 1156100 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:52:05.422715 1156100 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:52:05.422742 1156100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:52:05.422779 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:52:05.423468 1156100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0731 21:52:05.424198 1156100 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:52:05.426152 1156100 main.go:141] libmachine: Using API Version  1
	I0731 21:52:05.426180 1156100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:52:05.427231 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:52:05.427991 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:52:05.428015 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:52:05.428290 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:52:05.428485 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:52:05.428555 1156100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:52:05.428733 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:52:05.429216 1156100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:52:05.429240 1156100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:52:05.429579 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:52:05.450305 1156100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0731 21:52:05.451070 1156100 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:52:05.451945 1156100 main.go:141] libmachine: Using API Version  1
	I0731 21:52:05.451971 1156100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:52:05.452450 1156100 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:52:05.452723 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetState
	I0731 21:52:05.454595 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .DriverName
	I0731 21:52:05.456375 1156100 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:52:05.456396 1156100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:52:05.456422 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHHostname
	I0731 21:52:05.460012 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:52:05.460671 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:43:07", ip: ""} in network mk-custom-flannel-605794: {Iface:virbr4 ExpiryTime:2024-07-31 22:51:26 +0000 UTC Type:0 Mac:52:54:00:c8:43:07 Iaid: IPaddr:192.168.50.144 Prefix:24 Hostname:custom-flannel-605794 Clientid:01:52:54:00:c8:43:07}
	I0731 21:52:05.460698 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | domain custom-flannel-605794 has defined IP address 192.168.50.144 and MAC address 52:54:00:c8:43:07 in network mk-custom-flannel-605794
	I0731 21:52:05.461031 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHPort
	I0731 21:52:05.464039 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHKeyPath
	I0731 21:52:05.464363 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .GetSSHUsername
	I0731 21:52:05.464601 1156100 sshutil.go:53] new ssh client: &{IP:192.168.50.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/custom-flannel-605794/id_rsa Username:docker}
	I0731 21:52:05.779575 1156100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:52:05.779620 1156100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 21:52:05.809493 1156100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:52:05.998489 1156100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:52:06.341199 1156100 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0731 21:52:06.350054 1156100 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-605794" to be "Ready" ...
	I0731 21:52:06.849769 1156100 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-605794" context rescaled to 1 replicas
	I0731 21:52:06.851260 1156100 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041723746s)
	I0731 21:52:06.851319 1156100 main.go:141] libmachine: Making call to close driver server
	I0731 21:52:06.851341 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .Close
	I0731 21:52:06.851392 1156100 main.go:141] libmachine: Making call to close driver server
	I0731 21:52:06.851415 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .Close
	I0731 21:52:06.851755 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Closing plugin on server side
	I0731 21:52:06.851785 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Closing plugin on server side
	I0731 21:52:06.851811 1156100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:52:06.851819 1156100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:52:06.851827 1156100 main.go:141] libmachine: Making call to close driver server
	I0731 21:52:06.851835 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .Close
	I0731 21:52:06.851963 1156100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:52:06.851978 1156100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:52:06.852144 1156100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:52:06.852177 1156100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:52:06.852448 1156100 main.go:141] libmachine: Making call to close driver server
	I0731 21:52:06.852470 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .Close
	I0731 21:52:06.852979 1156100 main.go:141] libmachine: (custom-flannel-605794) DBG | Closing plugin on server side
	I0731 21:52:06.853004 1156100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:52:06.853015 1156100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:52:06.874831 1156100 main.go:141] libmachine: Making call to close driver server
	I0731 21:52:06.874865 1156100 main.go:141] libmachine: (custom-flannel-605794) Calling .Close
	I0731 21:52:06.875206 1156100 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:52:06.875230 1156100 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:52:06.876689 1156100 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 21:52:04.091606 1157553 main.go:141] libmachine: (kindnet-605794) Calling .GetIP
	I0731 21:52:04.094916 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:04.095337 1157553 main.go:141] libmachine: (kindnet-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d3:e6", ip: ""} in network mk-kindnet-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:51:50 +0000 UTC Type:0 Mac:52:54:00:60:d3:e6 Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:kindnet-605794 Clientid:01:52:54:00:60:d3:e6}
	I0731 21:52:04.095368 1157553 main.go:141] libmachine: (kindnet-605794) DBG | domain kindnet-605794 has defined IP address 192.168.61.151 and MAC address 52:54:00:60:d3:e6 in network mk-kindnet-605794
	I0731 21:52:04.095602 1157553 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:52:04.100928 1157553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:52:04.115188 1157553 kubeadm.go:883] updating cluster {Name:kindnet-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0731 21:52:04.115361 1157553 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:52:04.115429 1157553 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:52:04.156163 1157553 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:52:04.156325 1157553 ssh_runner.go:195] Run: which lz4
	I0731 21:52:04.162230 1157553 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:52:04.167718 1157553 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:52:04.167848 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:52:05.762431 1157553 crio.go:462] duration metric: took 1.600247046s to copy over tarball
	I0731 21:52:05.762532 1157553 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:52:08.951356 1157553 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.188789195s)
	I0731 21:52:08.951407 1157553 crio.go:469] duration metric: took 3.18894156s to extract the tarball
	I0731 21:52:08.951419 1157553 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:52:08.990238 1157553 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:52:09.033292 1157553 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:52:09.033323 1157553 cache_images.go:84] Images are preloaded, skipping loading
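The preload handling above follows a check-then-populate pattern: ask CRI-O (via crictl images) whether the kube-apiserver image for the target version is already present; if not, copy the preloaded tarball to the guest over SSH and unpack it into /var, which holds CRI-O's image store, then re-check. A compressed sketch of that flow (not minikube's cache_images code; it omits the scp step shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasPreloadedImages asks CRI-O, via crictl, whether the expected
	// kube-apiserver image for this Kubernetes version is already present.
	func hasPreloadedImages(version string) bool {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		return err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:"+version)
	}

	func main() {
		const version = "v1.30.3"
		if hasPreloadedImages(version) {
			fmt.Println("all images are preloaded for cri-o runtime.")
			return
		}
		// Otherwise the preload tarball would first be copied to the guest
		// (the scp step in the log), then unpacked over /var.
		extract := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := extract.Run(); err != nil {
			fmt.Println("extract failed:", err)
		}
	}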
	I0731 21:52:09.033334 1157553 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.30.3 crio true true} ...
	I0731 21:52:09.033468 1157553 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0731 21:52:09.033542 1157553 ssh_runner.go:195] Run: crio config
	I0731 21:52:09.080506 1157553 cni.go:84] Creating CNI manager for "kindnet"
	I0731 21:52:09.080532 1157553 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:52:09.080557 1157553 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-605794 NodeName:kindnet-605794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:52:09.080728 1157553 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-605794"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
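The kubeadm config rendered above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A small, illustrative Go sketch of sanity-checking such a multi-document file before handing it to kubeadm, assuming gopkg.in/yaml.v3 for decoding; `checkKubeadmConfig` is a hypothetical helper, not part of minikube:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // checkKubeadmConfig decodes every YAML document in the file and prints its
    // apiVersion/kind, surfacing malformed YAML before kubeadm ever sees it.
    func checkKubeadmConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    return nil
                }
                return err
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }

    func main() {
        if err := checkKubeadmConfig("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }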
	I0731 21:52:09.080814 1157553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:52:09.091248 1157553 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:52:09.091335 1157553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:52:09.101216 1157553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0731 21:52:09.118493 1157553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:52:09.136190 1157553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0731 21:52:09.154069 1157553 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I0731 21:52:09.158618 1157553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
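The two commands above make the /etc/hosts update idempotent: any existing control-plane.minikube.internal mapping is filtered out and a fresh 192.168.61.151 entry appended. The same edit, sketched in Go; `ensureHostsEntry` is an illustrative helper, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps hostname to ip.
    func ensureHostsEntry(hostsPath, ip, hostname string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping for this hostname, mirroring the grep -v above.
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.61.151", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }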
	I0731 21:52:09.171670 1157553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:52:09.294380 1157553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:52:09.313424 1157553 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794 for IP: 192.168.61.151
	I0731 21:52:09.313451 1157553 certs.go:194] generating shared ca certs ...
	I0731 21:52:09.313468 1157553 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.313688 1157553 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:52:09.313762 1157553 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:52:09.313778 1157553 certs.go:256] generating profile certs ...
	I0731 21:52:09.313855 1157553 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.key
	I0731 21:52:09.313876 1157553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.crt with IP's: []
	I0731 21:52:09.399422 1157553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.crt ...
	I0731 21:52:09.399460 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.crt: {Name:mk6acaca808ea81ad1e32939176027532e45387a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.399669 1157553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.key ...
	I0731 21:52:09.399684 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/client.key: {Name:mk6ab2c4404ca8f029361333c9bdde3ba581546f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.399793 1157553 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key.fa76b4cd
	I0731 21:52:09.399812 1157553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt.fa76b4cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.151]
	I0731 21:52:09.483372 1157553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt.fa76b4cd ...
	I0731 21:52:09.483404 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt.fa76b4cd: {Name:mk1f50b58a454a0a902b686af7d0fea10b820236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.528985 1157553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key.fa76b4cd ...
	I0731 21:52:09.529026 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key.fa76b4cd: {Name:mk7c0e7ea8f3af402b19ec40802395b39027ccee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.529152 1157553 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt.fa76b4cd -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt
	I0731 21:52:09.529227 1157553 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key.fa76b4cd -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key
	I0731 21:52:09.529285 1157553 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.key
	I0731 21:52:09.529315 1157553 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.crt with IP's: []
	I0731 21:52:09.772869 1157553 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.crt ...
	I0731 21:52:09.772907 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.crt: {Name:mk043b202b788453950869696e49fbb584320638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:52:09.773077 1157553 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.key ...
	I0731 21:52:09.773089 1157553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.key: {Name:mkdbd3d54c3c4ea33ba8a3e7f582fddc65c59b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
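The profile certificates generated above (client, apiserver, proxy-client) are leaf certs signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.151 seen in the log. A self-contained Go sketch of issuing such a CA-signed cert with those SANs; the subjects, key sizes and lifetimes here are assumptions for illustration, not minikube's exact values, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA (illustrative only).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the apiserver SANs from the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.151"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }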
	I0731 21:52:09.773252 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:52:09.773288 1157553 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:52:09.773298 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:52:09.773323 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:52:09.773351 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:52:09.773372 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:52:09.773410 1157553 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:52:09.774032 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:52:09.799762 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:52:09.825533 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:52:09.851719 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:52:09.877133 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 21:52:09.902413 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:52:09.931072 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:52:09.970365 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/kindnet-605794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:52:10.043068 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:52:10.069308 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:52:10.094504 1157553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:52:10.120020 1157553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:52:10.137598 1157553 ssh_runner.go:195] Run: openssl version
	I0731 21:52:10.143338 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:52:10.154795 1157553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:52:10.160603 1157553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:52:10.160716 1157553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:52:10.168353 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:52:10.179447 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:52:10.190521 1157553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:52:10.195682 1157553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:52:10.195751 1157553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:52:10.201778 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:52:10.213521 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:52:10.225050 1157553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:52:10.229831 1157553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:52:10.229917 1157553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:52:10.235745 1157553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
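Each CA bundle copied to /usr/share/ca-certificates above is then exposed to OpenSSL through a symlink named after its subject hash (for example minikubeCA.pem becomes /etc/ssl/certs/b5213941.0). A small Go sketch of that step, shelling out to `openssl x509 -hash -noout` exactly as the log does; `linkCertByHash` is an illustrative helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash creates <certsDir>/<subject-hash>.0 pointing at pemPath,
    // which is how OpenSSL locates trusted CAs in its certs directory.
    func linkCertByHash(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, mirroring `ln -fs`.
        _ = os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            return "", err
        }
        return link, nil
    }

    func main() {
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }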
	I0731 21:52:10.247501 1157553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:52:10.251743 1157553 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:52:10.251817 1157553 kubeadm.go:392] StartCluster: {Name:kindnet-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:52:10.251890 1157553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:52:10.251943 1157553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:52:10.291253 1157553 cri.go:89] found id: ""
	I0731 21:52:10.291337 1157553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:52:10.301749 1157553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:52:10.311717 1157553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:52:10.321635 1157553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:52:10.321654 1157553 kubeadm.go:157] found existing configuration files:
	
	I0731 21:52:10.321700 1157553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:52:10.330741 1157553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:52:10.330836 1157553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:52:10.340625 1157553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:52:10.350042 1157553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:52:10.350165 1157553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:52:10.360191 1157553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:52:10.369284 1157553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:52:10.369354 1157553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:52:10.381281 1157553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:52:10.392123 1157553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:52:10.392202 1157553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
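The block above is the stale-config sweep: each expected kubeconfig under /etc/kubernetes is grepped for https://control-plane.minikube.internal:8443 and removed when the endpoint is missing (here the files simply do not exist yet), so that kubeadm init starts from a clean slate. An illustrative Go sketch of the same idea; `removeStaleConfigs` is not the actual minikube code:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleConfigs deletes any kubeconfig that does not reference the
    // expected control-plane endpoint.
    func removeStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                // Missing file: nothing to clean up (rm -f would be a no-op anyway).
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("removing stale config %s\n", p)
                _ = os.Remove(p)
            }
        }
    }

    func main() {
        removeStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }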
	I0731 21:52:10.402121 1157553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:52:10.457346 1157553 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:52:10.457435 1157553 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:52:10.588301 1157553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:52:10.588451 1157553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:52:10.588584 1157553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:52:10.791054 1157553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:52:06.877878 1156100 addons.go:510] duration metric: took 1.515983819s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 21:52:08.355000 1156100 node_ready.go:53] node "custom-flannel-605794" has status "Ready":"False"
	I0731 21:52:10.358150 1156100 node_ready.go:53] node "custom-flannel-605794" has status "Ready":"False"
	I0731 21:52:10.924426 1157553 out.go:204]   - Generating certificates and keys ...
	I0731 21:52:10.924599 1157553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:52:10.924705 1157553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:52:10.924803 1157553 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:52:11.074528 1157553 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:52:11.464746 1157553 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:52:11.726383 1157553 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:52:11.878852 1157553 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:52:11.879158 1157553 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-605794 localhost] and IPs [192.168.61.151 127.0.0.1 ::1]
	I0731 21:52:12.061935 1157553 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:52:12.062180 1157553 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-605794 localhost] and IPs [192.168.61.151 127.0.0.1 ::1]
	I0731 21:52:12.469252 1157553 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:52:12.741917 1157553 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:52:12.950763 1157553 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:52:12.950868 1157553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:52:13.042756 1157553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:52:13.163763 1157553 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:52:13.440963 1157553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:52:13.515941 1157553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:52:13.577848 1157553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:52:13.578411 1157553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:52:13.580881 1157553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:52:12.855602 1156100 node_ready.go:53] node "custom-flannel-605794" has status "Ready":"False"
	I0731 21:52:15.353878 1156100 node_ready.go:53] node "custom-flannel-605794" has status "Ready":"False"
	I0731 21:52:13.582644 1157553 out.go:204]   - Booting up control plane ...
	I0731 21:52:13.582759 1157553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:52:13.582876 1157553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:52:13.582970 1157553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:52:13.599008 1157553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:52:13.599751 1157553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:52:13.599828 1157553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:52:13.737904 1157553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:52:13.738096 1157553 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:52:14.241865 1157553 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.049739ms
	I0731 21:52:14.241986 1157553 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:52:19.742665 1157553 kubeadm.go:310] [api-check] The API server is healthy after 5.502371885s
	I0731 21:52:19.759285 1157553 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:52:19.775403 1157553 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:52:19.806150 1157553 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:52:19.806413 1157553 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-605794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:52:19.819995 1157553 kubeadm.go:310] [bootstrap-token] Using token: nw95zl.z2s886xwdaoknoog
	I0731 21:52:19.821345 1157553 out.go:204]   - Configuring RBAC rules ...
	I0731 21:52:19.821499 1157553 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:52:19.830855 1157553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:52:19.841823 1157553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:52:19.845584 1157553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:52:19.849721 1157553 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:52:19.855180 1157553 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:52:20.153404 1157553 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:52:20.609197 1157553 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:52:21.155461 1157553 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:52:21.155492 1157553 kubeadm.go:310] 
	I0731 21:52:21.155571 1157553 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:52:21.155581 1157553 kubeadm.go:310] 
	I0731 21:52:21.155656 1157553 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:52:21.155666 1157553 kubeadm.go:310] 
	I0731 21:52:21.155713 1157553 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:52:21.155780 1157553 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:52:21.155852 1157553 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:52:21.155862 1157553 kubeadm.go:310] 
	I0731 21:52:21.155928 1157553 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:52:21.155939 1157553 kubeadm.go:310] 
	I0731 21:52:21.156014 1157553 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:52:21.156022 1157553 kubeadm.go:310] 
	I0731 21:52:21.156114 1157553 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:52:21.156225 1157553 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:52:21.156313 1157553 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:52:21.156323 1157553 kubeadm.go:310] 
	I0731 21:52:21.156427 1157553 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:52:21.156551 1157553 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:52:21.156566 1157553 kubeadm.go:310] 
	I0731 21:52:21.156669 1157553 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nw95zl.z2s886xwdaoknoog \
	I0731 21:52:21.156798 1157553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:52:21.156849 1157553 kubeadm.go:310] 	--control-plane 
	I0731 21:52:21.156882 1157553 kubeadm.go:310] 
	I0731 21:52:21.157026 1157553 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:52:21.157047 1157553 kubeadm.go:310] 
	I0731 21:52:21.157162 1157553 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nw95zl.z2s886xwdaoknoog \
	I0731 21:52:21.157322 1157553 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:52:21.157705 1157553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
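The `--discovery-token-ca-cert-hash` value in the join commands above is the SHA-256 of the cluster CA's Subject Public Key Info, which joining nodes use to pin the CA they download from the cluster. A short Go sketch of recomputing it from ca.crt (illustrative; kubeadm derives it internally):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash returns the kubeadm-style "sha256:<hex>" hash of the CA's
    // Subject Public Key Info, as printed in the join command above.
    func caCertHash(caPath string) (string, error) {
        data, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
        fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
    }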
	I0731 21:52:21.157786 1157553 cni.go:84] Creating CNI manager for "kindnet"
	I0731 21:52:21.159568 1157553 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 21:52:16.353791 1156100 node_ready.go:49] node "custom-flannel-605794" has status "Ready":"True"
	I0731 21:52:16.353822 1156100 node_ready.go:38] duration metric: took 10.003701184s for node "custom-flannel-605794" to be "Ready" ...
	I0731 21:52:16.353835 1156100 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:52:16.368483 1156100 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-qfqfb" in "kube-system" namespace to be "Ready" ...
	I0731 21:52:18.376260 1156100 pod_ready.go:102] pod "coredns-7db6d8ff4d-qfqfb" in "kube-system" namespace has status "Ready":"False"
	I0731 21:52:20.376702 1156100 pod_ready.go:102] pod "coredns-7db6d8ff4d-qfqfb" in "kube-system" namespace has status "Ready":"False"
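The interleaved pid-1156100 lines are the readiness polls for the parallel custom-flannel-605794 profile: first the node Ready condition, then each system-critical pod such as coredns-7db6d8ff4d-qfqfb. A minimal client-go sketch of the pod half of that wait; the kubeconfig path, 2s poll interval and helper name are assumptions, while the namespace, pod name and 15m timeout come from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the context expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
        defer cancel()
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-7db6d8ff4d-qfqfb"))
    }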
	
	
	==> CRI-O <==
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.598849720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462742598633806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=660316b2-bc01-41c1-b91e-724557522499 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.599372916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5290cea-6d63-4049-b678-bdbed7f25335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.599425042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5290cea-6d63-4049-b678-bdbed7f25335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.599630479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5290cea-6d63-4049-b678-bdbed7f25335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.640973283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4bb44ec9-e912-48fb-8c0b-3b63ce132533 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.641061459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4bb44ec9-e912-48fb-8c0b-3b63ce132533 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.642340075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a902932e-1325-427b-97bd-0d956d885908 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.642959277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462742642928959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a902932e-1325-427b-97bd-0d956d885908 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.643620721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cbbf5d9-4f13-4738-843a-ae73b65f23fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.643786031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cbbf5d9-4f13-4738-843a-ae73b65f23fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.644025259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cbbf5d9-4f13-4738-843a-ae73b65f23fe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.682871027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3847ffa3-a6aa-4534-af58-d7b6aad30a74 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.682945882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3847ffa3-a6aa-4534-af58-d7b6aad30a74 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.684268898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5936ce95-11fb-4cb3-91bd-d753c1f27355 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.685083686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462742685051293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5936ce95-11fb-4cb3-91bd-d753c1f27355 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.685695213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9875d5e-0780-4f4a-a784-0af8440e2403 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.685756751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9875d5e-0780-4f4a-a784-0af8440e2403 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.685983537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9875d5e-0780-4f4a-a784-0af8440e2403 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.725516453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b776524-edc2-4259-af1c-05410881c647 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.725591844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b776524-edc2-4259-af1c-05410881c647 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.726739574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a01a5ea0-038a-4696-9cd4-b554d3782cf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.727221158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462742727195178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a01a5ea0-038a-4696-9cd4-b554d3782cf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.727909842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d8df17a-6adf-4d98-8e2b-b8845b67a644 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.727979437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d8df17a-6adf-4d98-8e2b-b8845b67a644 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:52:22 default-k8s-diff-port-755535 crio[729]: time="2024-07-31 21:52:22.728286736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e6f4f2d56f3dae658f474871e27e3492d0ee93b9b2ee9da997ae1c01ff4f49e,PodSandboxId:79c941e0df22bdc8f8dc8ef54a126edbc3030988b8d49c15525e4dfb9d7d8e77,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461401217175326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 873ec90f-0bdc-41a1-be49-45116eb0ccab,},Annotations:map[string]string{io.kubernetes.container.hash: 57754f62,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999,PodSandboxId:b248b79002e1e5e79698b129c054e651b3d3a7d3d7cd61ca357e40ef8210e7c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461398956608200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t9v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a16bc-571e-4d00-b12a-f50dc462f48f,},Annotations:map[string]string{io.kubernetes.container.hash: 6fadb29c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461391985210613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d,PodSandboxId:f834d6d69eecf805c50fbcf0246ba87c38db7b98524640b683dc312a6c67d30c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722461391335491198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqcmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476ef297-b
803-4125-980a-dc5501361d71,},Annotations:map[string]string{io.kubernetes.container.hash: 795c817e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247,PodSandboxId:ebf4bbfa181ae75a40e108da7aca359cf7060f3b4e0443350281cfb02a571a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461391303081367,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ff2805-3db9-4c39-9a70
-77073d33e3bd,},Annotations:map[string]string{io.kubernetes.container.hash: 73233b31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d,PodSandboxId:dfb00b692ae1eba269eb4fbce3e5ec4f44ebab8a4f50c50a3b9028c97dc4b60a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461386567335023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25920a19635748b7933f5
f3169669c05,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82,PodSandboxId:b248e01209ed33ae2f83bd45ae949efdd83adb539f6cc78b19d79f441aba4d74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461386559042291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d6c7970ae2afdf9f14e0079e6f9c4666,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a,PodSandboxId:151c36711165488c3d70a1a1738b1ce2137cf3c718ae61cf03307a75bf773ddf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461386560050639,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fde881bd185e21fa8b63992d556
5a66,},Annotations:map[string]string{io.kubernetes.container.hash: bd69097d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329,PodSandboxId:81db95a0092552df83842b4bc7197c4ee3e678236b6d9cd5d68e554cca2b8006,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461386504341397,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-755535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b38b6fd59462082d65a70ef38d126
0f,},Annotations:map[string]string{io.kubernetes.container.hash: f7947ae5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d8df17a-6adf-4d98-8e2b-b8845b67a644 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1e6f4f2d56f3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   79c941e0df22b       busybox
	bcb32c8ad4c0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   b248b79002e1e       coredns-7db6d8ff4d-t9v4z
	d88829a348f0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       3                   ebf4bbfa181ae       storage-provisioner
	09a74d133e024       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      22 minutes ago      Running             kube-proxy                1                   f834d6d69eecf       kube-proxy-mqcmt
	f7bd90ab6a69f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   ebf4bbfa181ae       storage-provisioner
	4c93a360c730d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      22 minutes ago      Running             kube-scheduler            1                   dfb00b692ae1e       kube-scheduler-default-k8s-diff-port-755535
	4cc8ee4ac01a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   151c367111654       etcd-default-k8s-diff-port-755535
	cc7cd56cee77f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      22 minutes ago      Running             kube-controller-manager   1                   b248e01209ed3       kube-controller-manager-default-k8s-diff-port-755535
	147ee230f5cd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      22 minutes ago      Running             kube-apiserver            1                   81db95a009255       kube-apiserver-default-k8s-diff-port-755535
	
	
	==> coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39377 - 13287 "HINFO IN 5308087396783994287.1259092129968555909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023772523s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-755535
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-755535
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=default-k8s-diff-port-755535
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_24_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:24:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-755535
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:50:44 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:50:44 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:50:44 +0000   Wed, 31 Jul 2024 21:24:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:50:44 +0000   Wed, 31 Jul 2024 21:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    default-k8s-diff-port-755535
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ccb94d8906748b98a1ce78ffba483b6
	  System UUID:                0ccb94d8-9067-48b9-8a1c-e78ffba483b6
	  Boot ID:                    fa0b0819-13dd-4372-ada4-4524a3fff1a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-t9v4z                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-default-k8s-diff-port-755535                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-755535             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-755535    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-mqcmt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-default-k8s-diff-port-755535             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-968kv                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-755535 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node default-k8s-diff-port-755535 event: Registered Node default-k8s-diff-port-755535 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-755535 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-755535 event: Registered Node default-k8s-diff-port-755535 in Controller
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048270] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037647] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.035729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959925] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.272038] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.074648] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053590] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.195633] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.162145] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.296834] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.459158] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.065379] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.950249] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +5.595615] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.913988] systemd-fstab-generator[1597]: Ignoring "noauto" option for root device
	[  +3.766726] kauditd_printk_skb: 67 callbacks suppressed
	[Jul31 21:30] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] <==
	{"level":"info","ts":"2024-07-31T21:49:45.754045Z","caller":"traceutil/trace.go:171","msg":"trace[1589291798] linearizableReadLoop","detail":"{readStateIndex:1841; appliedIndex:1840; }","duration":"142.975012ms","start":"2024-07-31T21:49:45.61105Z","end":"2024-07-31T21:49:45.754025Z","steps":["trace[1589291798] 'read index received'  (duration: 142.788548ms)","trace[1589291798] 'applied index is now lower than readState.Index'  (duration: 185.904µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:49:45.754261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.185623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T21:49:45.754362Z","caller":"traceutil/trace.go:171","msg":"trace[23345010] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1561; }","duration":"143.340358ms","start":"2024-07-31T21:49:45.611013Z","end":"2024-07-31T21:49:45.754353Z","steps":["trace[23345010] 'agreement among raft nodes before linearized reading'  (duration: 143.19255ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:49:45.754272Z","caller":"traceutil/trace.go:171","msg":"trace[337942464] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"324.733675ms","start":"2024-07-31T21:49:45.429526Z","end":"2024-07-31T21:49:45.75426Z","steps":["trace[337942464] 'process raft request'  (duration: 324.371599ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:49:45.755107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:49:45.429512Z","time spent":"325.530972ms","remote":"127.0.0.1:57084","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1558 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T21:49:46.559726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.028854ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13477462835021261342 > lease_revoke:<id:3b09910ab2f8a9d0>","response":"size:27"}
	{"level":"info","ts":"2024-07-31T21:49:46.559895Z","caller":"traceutil/trace.go:171","msg":"trace[994986236] linearizableReadLoop","detail":"{readStateIndex:1842; appliedIndex:1841; }","duration":"175.792567ms","start":"2024-07-31T21:49:46.384088Z","end":"2024-07-31T21:49:46.55988Z","steps":["trace[994986236] 'read index received'  (duration: 45.543187ms)","trace[994986236] 'applied index is now lower than readState.Index'  (duration: 130.247826ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:49:46.559987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.893459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:49:46.560027Z","caller":"traceutil/trace.go:171","msg":"trace[932871385] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1561; }","duration":"175.96757ms","start":"2024-07-31T21:49:46.384051Z","end":"2024-07-31T21:49:46.560018Z","steps":["trace[932871385] 'agreement among raft nodes before linearized reading'  (duration: 175.894393ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:49:48.650172Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1321}
	{"level":"info","ts":"2024-07-31T21:49:48.653623Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1321,"took":"3.105701ms","hash":3069200332,"current-db-size-bytes":2584576,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1495040,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-31T21:49:48.653747Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3069200332,"revision":1321,"compact-revision":1077}
	{"level":"info","ts":"2024-07-31T21:50:54.233648Z","caller":"traceutil/trace.go:171","msg":"trace[1963601338] linearizableReadLoop","detail":"{readStateIndex:1913; appliedIndex:1912; }","duration":"124.657518ms","start":"2024-07-31T21:50:54.108974Z","end":"2024-07-31T21:50:54.233631Z","steps":["trace[1963601338] 'read index received'  (duration: 124.580938ms)","trace[1963601338] 'applied index is now lower than readState.Index'  (duration: 76.035µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T21:50:54.233804Z","caller":"traceutil/trace.go:171","msg":"trace[1766222983] transaction","detail":"{read_only:false; response_revision:1617; number_of_response:1; }","duration":"129.80399ms","start":"2024-07-31T21:50:54.10399Z","end":"2024-07-31T21:50:54.233794Z","steps":["trace[1766222983] 'process raft request'  (duration: 129.544069ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:50:54.233985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.562052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T21:50:54.234076Z","caller":"traceutil/trace.go:171","msg":"trace[969056626] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1617; }","duration":"100.691786ms","start":"2024-07-31T21:50:54.133375Z","end":"2024-07-31T21:50:54.234067Z","steps":["trace[969056626] 'agreement among raft nodes before linearized reading'  (duration: 100.538026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:50:54.234138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.15392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:50:54.234193Z","caller":"traceutil/trace.go:171","msg":"trace[1692272311] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1617; }","duration":"125.214067ms","start":"2024-07-31T21:50:54.108969Z","end":"2024-07-31T21:50:54.234183Z","steps":["trace[1692272311] 'agreement among raft nodes before linearized reading'  (duration: 125.135062ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:51:37.216873Z","caller":"traceutil/trace.go:171","msg":"trace[392722814] transaction","detail":"{read_only:false; response_revision:1653; number_of_response:1; }","duration":"290.817956ms","start":"2024-07-31T21:51:36.926033Z","end":"2024-07-31T21:51:37.216851Z","steps":["trace[392722814] 'process raft request'  (duration: 290.659754ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:51:37.224291Z","caller":"traceutil/trace.go:171","msg":"trace[1461829409] transaction","detail":"{read_only:false; response_revision:1654; number_of_response:1; }","duration":"161.682885ms","start":"2024-07-31T21:51:37.062585Z","end":"2024-07-31T21:51:37.224268Z","steps":["trace[1461829409] 'process raft request'  (duration: 161.58555ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:51:41.589131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.056266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13477462835021261900 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.145\" mod_revision:1648 > success:<request_put:<key:\"/registry/masterleases/192.168.39.145\" value_size:68 lease:4254090798166486089 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.145\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T21:51:41.589342Z","caller":"traceutil/trace.go:171","msg":"trace[148285979] transaction","detail":"{read_only:false; response_revision:1657; number_of_response:1; }","duration":"165.796595ms","start":"2024-07-31T21:51:41.423528Z","end":"2024-07-31T21:51:41.589325Z","steps":["trace[148285979] 'process raft request'  (duration: 19.463689ms)","trace[148285979] 'compare'  (duration: 145.927948ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:51:41.850081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.303489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:51:41.850148Z","caller":"traceutil/trace.go:171","msg":"trace[2083859753] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:1657; }","duration":"157.432896ms","start":"2024-07-31T21:51:41.692701Z","end":"2024-07-31T21:51:41.850134Z","steps":["trace[2083859753] 'count revisions from in-memory index tree'  (duration: 157.240422ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:52:04.800413Z","caller":"traceutil/trace.go:171","msg":"trace[60399319] transaction","detail":"{read_only:false; response_revision:1675; number_of_response:1; }","duration":"182.1153ms","start":"2024-07-31T21:52:04.618278Z","end":"2024-07-31T21:52:04.800393Z","steps":["trace[60399319] 'process raft request'  (duration: 182.01793ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:52:23 up 23 min,  0 users,  load average: 0.12, 0.09, 0.08
	Linux default-k8s-diff-port-755535 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] <==
	I0731 21:45:50.982917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:47:50.981467       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:47:50.981519       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:47:50.981528       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:47:50.983765       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:47:50.983911       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:47:50.983956       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:49:49.985258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:49:49.985370       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:49:50.986091       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:49:50.986385       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:49:50.986518       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:49:50.986312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:49:50.986828       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:49:50.988603       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:50:50.986886       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:50:50.987118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:50:50.987190       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:50:50.989119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:50:50.989205       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:50:50.989212       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] <==
	I0731 21:46:33.238456       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:02.739993       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:47:03.245821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:32.744039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:47:33.253257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:02.748286       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:48:03.260103       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:32.752324       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:48:33.267505       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:02.761164       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:49:03.274163       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:32.765383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:49:33.281254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:50:02.770036       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:50:03.287972       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:50:32.775777       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:50:33.298085       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:51:02.780884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:51:03.305447       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:51:22.938202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="150.18µs"
	E0731 21:51:32.786283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:51:33.314266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:51:37.223393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.077µs"
	E0731 21:52:02.791490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:52:03.321309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] <==
	I0731 21:29:51.501362       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:29:51.512418       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0731 21:29:51.562418       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:29:51.564013       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:29:51.564097       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:29:51.568735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:29:51.569034       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:29:51.569083       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:29:51.573209       1 config.go:319] "Starting node config controller"
	I0731 21:29:51.573258       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:29:51.573998       1 config.go:192] "Starting service config controller"
	I0731 21:29:51.574072       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:29:51.574156       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:29:51.574192       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:29:51.673806       1 shared_informer.go:320] Caches are synced for node config
	I0731 21:29:51.674777       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:29:51.677772       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] <==
	I0731 21:29:47.600747       1 serving.go:380] Generated self-signed cert in-memory
	I0731 21:29:50.004743       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:29:50.004775       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:29:50.015322       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:29:50.015403       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 21:29:50.015411       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 21:29:50.015424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:29:50.025140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:29:50.025178       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:29:50.025196       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 21:29:50.025201       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 21:29:50.115617       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0731 21:29:50.126183       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:29:50.126184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:49:45 default-k8s-diff-port-755535 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:50:00 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:00.918213     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:50:14 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:14.918813     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:50:27 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:27.918569     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:50:41 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:41.920445     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:50:45 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:45.936466     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:50:45 default-k8s-diff-port-755535 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:50:45 default-k8s-diff-port-755535 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:50:45 default-k8s-diff-port-755535 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:50:45 default-k8s-diff-port-755535 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:50:54 default-k8s-diff-port-755535 kubelet[937]: E0731 21:50:54.918456     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:51:08 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:08.947451     937 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:51:08 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:08.947500     937 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:51:08 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:08.947637     937 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnf6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-968kv_kube-system(c144d022-c820-43eb-bed1-80f2dca27ac0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 21:51:08 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:08.948043     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:51:22 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:22.918815     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:51:36 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:36.921100     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:51:45 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:45.937028     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:51:45 default-k8s-diff-port-755535 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:51:45 default-k8s-diff-port-755535 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:51:45 default-k8s-diff-port-755535 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:51:45 default-k8s-diff-port-755535 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:51:51 default-k8s-diff-port-755535 kubelet[937]: E0731 21:51:51.920750     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:52:03 default-k8s-diff-port-755535 kubelet[937]: E0731 21:52:03.919445     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	Jul 31 21:52:14 default-k8s-diff-port-755535 kubelet[937]: E0731 21:52:14.919514     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-968kv" podUID="c144d022-c820-43eb-bed1-80f2dca27ac0"
	
	
	==> storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] <==
	I0731 21:29:52.134752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:29:52.149066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:29:52.149191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:30:09.601573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:30:09.602006       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d151d35-bc05-48aa-ba90-b060f018e0df", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa became leader
	I0731 21:30:09.602239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa!
	I0731 21:30:09.702759       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-755535_6c905fcc-95e2-4d3f-813c-dfa6507f7faa!
	
	
	==> storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] <==
	I0731 21:29:51.400055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:29:51.402208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-968kv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv: exit status 1 (78.158634ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-968kv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-755535 describe pod metrics-server-569cc877fc-968kv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.62s)
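The kubelet log above shows the metrics-server pod stuck in ImagePullBackOff because its image is resolved through the unreachable fake.domain registry (the addon was enabled with --registries=MetricsServer=fake.domain, as recorded in the Audit table further down). A minimal sketch for inspecting that state by hand, assuming the addon's deployment is named metrics-server in kube-system (a common default, not verified against this run):

	# assumption: deployment named "metrics-server" in namespace kube-system
	kubectl --context default-k8s-diff-port-755535 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# list matching pods and their current status
	kubectl --context default-k8s-diff-port-755535 -n kube-system get pods -o wide | grep metrics-server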

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (468.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563652 -n embed-certs-563652
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:51:08.293648473 +0000 UTC m=+6109.452079597
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-563652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-563652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.774µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-563652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
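The assertions above wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then inspect deploy/dashboard-metrics-scraper for an image containing registry.k8s.io/echoserver:1.4. A rough manual equivalent, using the same context and namespace (the label selector and jsonpath expression are illustrative, not copied from the test code):

	# pods the test waits for
	kubectl --context embed-certs-563652 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# image reference the test expects to contain registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-563652 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'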
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-563652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-563652 logs -n 25: (1.203545082s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	| start   | -p newest-cni-308216 --memory=2200 --alsologtostderr   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-308216             | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-308216                  | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-308216 --memory=2200 --alsologtostderr   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-308216 image list                           | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| delete  | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| delete  | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	| start   | -p auto-605794 --memory=3072                           | auto-605794                  | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p calico-605794 --memory=3072                         | calico-605794                | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-605794 pgrep -a                                | auto-605794                  | jenkins | v1.33.1 | 31 Jul 24 21:50 UTC | 31 Jul 24 21:50 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:49:59
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:49:59.324231 1155232 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:49:59.324406 1155232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:49:59.324421 1155232 out.go:304] Setting ErrFile to fd 2...
	I0731 21:49:59.324428 1155232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:49:59.324742 1155232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:49:59.325597 1155232 out.go:298] Setting JSON to false
	I0731 21:49:59.326752 1155232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19950,"bootTime":1722442649,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:49:59.326824 1155232 start.go:139] virtualization: kvm guest
	I0731 21:49:59.329376 1155232 out.go:177] * [calico-605794] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:49:59.331092 1155232 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:49:59.331107 1155232 notify.go:220] Checking for updates...
	I0731 21:49:59.334189 1155232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:49:59.335813 1155232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:49:59.337296 1155232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:49:59.338815 1155232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:49:59.340346 1155232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:49:59.342182 1155232 config.go:182] Loaded profile config "auto-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:49:59.342309 1155232 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:49:59.342424 1155232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:49:59.342574 1155232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:49:59.378952 1155232 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:49:59.380316 1155232 start.go:297] selected driver: kvm2
	I0731 21:49:59.380337 1155232 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:49:59.380353 1155232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:49:59.381436 1155232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:49:59.381530 1155232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:49:59.398764 1155232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:49:59.398835 1155232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:49:59.399190 1155232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:49:59.399234 1155232 cni.go:84] Creating CNI manager for "calico"
	I0731 21:49:59.399242 1155232 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 21:49:59.399327 1155232 start.go:340] cluster config:
	{Name:calico-605794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:49:59.399460 1155232 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:49:59.401431 1155232 out.go:177] * Starting "calico-605794" primary control-plane node in "calico-605794" cluster
	I0731 21:49:59.057212 1155156 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 21:49:59.057404 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:59.057475 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:59.073179 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0731 21:49:59.073716 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:59.074313 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:49:59.074335 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:59.074679 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:59.074903 1155156 main.go:141] libmachine: (auto-605794) Calling .GetMachineName
	I0731 21:49:59.075089 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:49:59.075233 1155156 start.go:159] libmachine.API.Create for "auto-605794" (driver="kvm2")
	I0731 21:49:59.075272 1155156 client.go:168] LocalClient.Create starting
	I0731 21:49:59.075313 1155156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 21:49:59.075356 1155156 main.go:141] libmachine: Decoding PEM data...
	I0731 21:49:59.075377 1155156 main.go:141] libmachine: Parsing certificate...
	I0731 21:49:59.075447 1155156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 21:49:59.075485 1155156 main.go:141] libmachine: Decoding PEM data...
	I0731 21:49:59.075503 1155156 main.go:141] libmachine: Parsing certificate...
	I0731 21:49:59.075529 1155156 main.go:141] libmachine: Running pre-create checks...
	I0731 21:49:59.075544 1155156 main.go:141] libmachine: (auto-605794) Calling .PreCreateCheck
	I0731 21:49:59.075948 1155156 main.go:141] libmachine: (auto-605794) Calling .GetConfigRaw
	I0731 21:49:59.076406 1155156 main.go:141] libmachine: Creating machine...
	I0731 21:49:59.076423 1155156 main.go:141] libmachine: (auto-605794) Calling .Create
	I0731 21:49:59.076576 1155156 main.go:141] libmachine: (auto-605794) Creating KVM machine...
	I0731 21:49:59.078109 1155156 main.go:141] libmachine: (auto-605794) DBG | found existing default KVM network
	I0731 21:49:59.079223 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.079050 1155189 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b1:2b:9e} reservation:<nil>}
	I0731 21:49:59.079970 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.079877 1155189 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6e:b2:f0} reservation:<nil>}
	I0731 21:49:59.081001 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.080906 1155189 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027d0b0}
	I0731 21:49:59.081058 1155156 main.go:141] libmachine: (auto-605794) DBG | created network xml: 
	I0731 21:49:59.081081 1155156 main.go:141] libmachine: (auto-605794) DBG | <network>
	I0731 21:49:59.081093 1155156 main.go:141] libmachine: (auto-605794) DBG |   <name>mk-auto-605794</name>
	I0731 21:49:59.081104 1155156 main.go:141] libmachine: (auto-605794) DBG |   <dns enable='no'/>
	I0731 21:49:59.081115 1155156 main.go:141] libmachine: (auto-605794) DBG |   
	I0731 21:49:59.081128 1155156 main.go:141] libmachine: (auto-605794) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0731 21:49:59.081142 1155156 main.go:141] libmachine: (auto-605794) DBG |     <dhcp>
	I0731 21:49:59.081163 1155156 main.go:141] libmachine: (auto-605794) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0731 21:49:59.081175 1155156 main.go:141] libmachine: (auto-605794) DBG |     </dhcp>
	I0731 21:49:59.081183 1155156 main.go:141] libmachine: (auto-605794) DBG |   </ip>
	I0731 21:49:59.081195 1155156 main.go:141] libmachine: (auto-605794) DBG |   
	I0731 21:49:59.081206 1155156 main.go:141] libmachine: (auto-605794) DBG | </network>
	I0731 21:49:59.081220 1155156 main.go:141] libmachine: (auto-605794) DBG | 
	I0731 21:49:59.086895 1155156 main.go:141] libmachine: (auto-605794) DBG | trying to create private KVM network mk-auto-605794 192.168.61.0/24...
	I0731 21:49:59.166552 1155156 main.go:141] libmachine: (auto-605794) DBG | private KVM network mk-auto-605794 192.168.61.0/24 created
	I0731 21:49:59.166586 1155156 main.go:141] libmachine: (auto-605794) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794 ...
	I0731 21:49:59.166604 1155156 main.go:141] libmachine: (auto-605794) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:49:59.166666 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.166551 1155189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:49:59.166858 1155156 main.go:141] libmachine: (auto-605794) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:49:59.486107 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.485959 1155189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa...
	I0731 21:49:59.890362 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.890205 1155189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/auto-605794.rawdisk...
	I0731 21:49:59.890396 1155156 main.go:141] libmachine: (auto-605794) DBG | Writing magic tar header
	I0731 21:49:59.890408 1155156 main.go:141] libmachine: (auto-605794) DBG | Writing SSH key tar header
	I0731 21:49:59.890441 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:49:59.890324 1155189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794 ...
	I0731 21:49:59.890462 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794
	I0731 21:49:59.890474 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 21:49:59.890487 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:49:59.890508 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794 (perms=drwx------)
	I0731 21:49:59.890522 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:49:59.890535 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 21:49:59.890549 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:49:59.890558 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:49:59.890564 1155156 main.go:141] libmachine: (auto-605794) DBG | Checking permissions on dir: /home
	I0731 21:49:59.890569 1155156 main.go:141] libmachine: (auto-605794) DBG | Skipping /home - not owner
	I0731 21:49:59.890587 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 21:49:59.890601 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 21:49:59.890614 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:49:59.890626 1155156 main.go:141] libmachine: (auto-605794) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:49:59.890639 1155156 main.go:141] libmachine: (auto-605794) Creating domain...
	I0731 21:49:59.891817 1155156 main.go:141] libmachine: (auto-605794) define libvirt domain using xml: 
	I0731 21:49:59.891842 1155156 main.go:141] libmachine: (auto-605794) <domain type='kvm'>
	I0731 21:49:59.891856 1155156 main.go:141] libmachine: (auto-605794)   <name>auto-605794</name>
	I0731 21:49:59.891864 1155156 main.go:141] libmachine: (auto-605794)   <memory unit='MiB'>3072</memory>
	I0731 21:49:59.891871 1155156 main.go:141] libmachine: (auto-605794)   <vcpu>2</vcpu>
	I0731 21:49:59.891876 1155156 main.go:141] libmachine: (auto-605794)   <features>
	I0731 21:49:59.891884 1155156 main.go:141] libmachine: (auto-605794)     <acpi/>
	I0731 21:49:59.891893 1155156 main.go:141] libmachine: (auto-605794)     <apic/>
	I0731 21:49:59.891900 1155156 main.go:141] libmachine: (auto-605794)     <pae/>
	I0731 21:49:59.891905 1155156 main.go:141] libmachine: (auto-605794)     
	I0731 21:49:59.891934 1155156 main.go:141] libmachine: (auto-605794)   </features>
	I0731 21:49:59.891959 1155156 main.go:141] libmachine: (auto-605794)   <cpu mode='host-passthrough'>
	I0731 21:49:59.891969 1155156 main.go:141] libmachine: (auto-605794)   
	I0731 21:49:59.891984 1155156 main.go:141] libmachine: (auto-605794)   </cpu>
	I0731 21:49:59.892007 1155156 main.go:141] libmachine: (auto-605794)   <os>
	I0731 21:49:59.892017 1155156 main.go:141] libmachine: (auto-605794)     <type>hvm</type>
	I0731 21:49:59.892022 1155156 main.go:141] libmachine: (auto-605794)     <boot dev='cdrom'/>
	I0731 21:49:59.892027 1155156 main.go:141] libmachine: (auto-605794)     <boot dev='hd'/>
	I0731 21:49:59.892032 1155156 main.go:141] libmachine: (auto-605794)     <bootmenu enable='no'/>
	I0731 21:49:59.892038 1155156 main.go:141] libmachine: (auto-605794)   </os>
	I0731 21:49:59.892050 1155156 main.go:141] libmachine: (auto-605794)   <devices>
	I0731 21:49:59.892062 1155156 main.go:141] libmachine: (auto-605794)     <disk type='file' device='cdrom'>
	I0731 21:49:59.892077 1155156 main.go:141] libmachine: (auto-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/boot2docker.iso'/>
	I0731 21:49:59.892107 1155156 main.go:141] libmachine: (auto-605794)       <target dev='hdc' bus='scsi'/>
	I0731 21:49:59.892122 1155156 main.go:141] libmachine: (auto-605794)       <readonly/>
	I0731 21:49:59.892131 1155156 main.go:141] libmachine: (auto-605794)     </disk>
	I0731 21:49:59.892141 1155156 main.go:141] libmachine: (auto-605794)     <disk type='file' device='disk'>
	I0731 21:49:59.892148 1155156 main.go:141] libmachine: (auto-605794)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:49:59.892160 1155156 main.go:141] libmachine: (auto-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/auto-605794.rawdisk'/>
	I0731 21:49:59.892168 1155156 main.go:141] libmachine: (auto-605794)       <target dev='hda' bus='virtio'/>
	I0731 21:49:59.892172 1155156 main.go:141] libmachine: (auto-605794)     </disk>
	I0731 21:49:59.892180 1155156 main.go:141] libmachine: (auto-605794)     <interface type='network'>
	I0731 21:49:59.892185 1155156 main.go:141] libmachine: (auto-605794)       <source network='mk-auto-605794'/>
	I0731 21:49:59.892204 1155156 main.go:141] libmachine: (auto-605794)       <model type='virtio'/>
	I0731 21:49:59.892218 1155156 main.go:141] libmachine: (auto-605794)     </interface>
	I0731 21:49:59.892225 1155156 main.go:141] libmachine: (auto-605794)     <interface type='network'>
	I0731 21:49:59.892232 1155156 main.go:141] libmachine: (auto-605794)       <source network='default'/>
	I0731 21:49:59.892238 1155156 main.go:141] libmachine: (auto-605794)       <model type='virtio'/>
	I0731 21:49:59.892245 1155156 main.go:141] libmachine: (auto-605794)     </interface>
	I0731 21:49:59.892253 1155156 main.go:141] libmachine: (auto-605794)     <serial type='pty'>
	I0731 21:49:59.892260 1155156 main.go:141] libmachine: (auto-605794)       <target port='0'/>
	I0731 21:49:59.892265 1155156 main.go:141] libmachine: (auto-605794)     </serial>
	I0731 21:49:59.892271 1155156 main.go:141] libmachine: (auto-605794)     <console type='pty'>
	I0731 21:49:59.892278 1155156 main.go:141] libmachine: (auto-605794)       <target type='serial' port='0'/>
	I0731 21:49:59.892284 1155156 main.go:141] libmachine: (auto-605794)     </console>
	I0731 21:49:59.892290 1155156 main.go:141] libmachine: (auto-605794)     <rng model='virtio'>
	I0731 21:49:59.892301 1155156 main.go:141] libmachine: (auto-605794)       <backend model='random'>/dev/random</backend>
	I0731 21:49:59.892308 1155156 main.go:141] libmachine: (auto-605794)     </rng>
	I0731 21:49:59.892313 1155156 main.go:141] libmachine: (auto-605794)     
	I0731 21:49:59.892321 1155156 main.go:141] libmachine: (auto-605794)     
	I0731 21:49:59.892325 1155156 main.go:141] libmachine: (auto-605794)   </devices>
	I0731 21:49:59.892331 1155156 main.go:141] libmachine: (auto-605794) </domain>
	I0731 21:49:59.892336 1155156 main.go:141] libmachine: (auto-605794) 
	I0731 21:49:59.896681 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:b1:46:55 in network default
	I0731 21:49:59.897237 1155156 main.go:141] libmachine: (auto-605794) Ensuring networks are active...
	I0731 21:49:59.897264 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:49:59.898019 1155156 main.go:141] libmachine: (auto-605794) Ensuring network default is active
	I0731 21:49:59.898281 1155156 main.go:141] libmachine: (auto-605794) Ensuring network mk-auto-605794 is active
	I0731 21:49:59.898802 1155156 main.go:141] libmachine: (auto-605794) Getting domain xml...
	I0731 21:49:59.899565 1155156 main.go:141] libmachine: (auto-605794) Creating domain...
	I0731 21:50:01.152554 1155156 main.go:141] libmachine: (auto-605794) Waiting to get IP...
	I0731 21:50:01.153564 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:01.154126 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:01.154157 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:01.154093 1155189 retry.go:31] will retry after 190.162743ms: waiting for machine to come up
	I0731 21:50:01.345672 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:01.346167 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:01.346215 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:01.346120 1155189 retry.go:31] will retry after 338.886397ms: waiting for machine to come up
	I0731 21:50:01.686939 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:01.687434 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:01.687463 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:01.687380 1155189 retry.go:31] will retry after 478.048962ms: waiting for machine to come up
	I0731 21:50:02.167306 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:02.167971 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:02.168002 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:02.167908 1155189 retry.go:31] will retry after 548.387748ms: waiting for machine to come up
	I0731 21:50:02.718332 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:02.718784 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:02.718809 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:02.718750 1155189 retry.go:31] will retry after 677.426175ms: waiting for machine to come up
	I0731 21:50:03.398197 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:03.398700 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:03.398730 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:03.398648 1155189 retry.go:31] will retry after 926.770077ms: waiting for machine to come up
	I0731 21:49:59.402813 1155232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:49:59.402858 1155232 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:49:59.402871 1155232 cache.go:56] Caching tarball of preloaded images
	I0731 21:49:59.402976 1155232 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:49:59.402992 1155232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:49:59.403140 1155232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/config.json ...
	I0731 21:49:59.403164 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/config.json: {Name:mk121f6a981ada411f2749b1f196341c6abc336a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:49:59.403342 1155232 start.go:360] acquireMachinesLock for calico-605794: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:50:04.327402 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:04.327855 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:04.327886 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:04.327806 1155189 retry.go:31] will retry after 1.036035721s: waiting for machine to come up
	I0731 21:50:05.365660 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:05.366189 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:05.366217 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:05.366126 1155189 retry.go:31] will retry after 1.194919051s: waiting for machine to come up
	I0731 21:50:06.562374 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:06.562958 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:06.562980 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:06.562923 1155189 retry.go:31] will retry after 1.578211049s: waiting for machine to come up
	I0731 21:50:08.143798 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:08.144383 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:08.144414 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:08.144319 1155189 retry.go:31] will retry after 1.903617125s: waiting for machine to come up
	I0731 21:50:10.049156 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:10.049568 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:10.049597 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:10.049510 1155189 retry.go:31] will retry after 2.208346984s: waiting for machine to come up
	I0731 21:50:12.260410 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:12.260926 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:12.260951 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:12.260862 1155189 retry.go:31] will retry after 2.912775933s: waiting for machine to come up
	I0731 21:50:15.175618 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:15.176110 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find current IP address of domain auto-605794 in network mk-auto-605794
	I0731 21:50:15.176141 1155156 main.go:141] libmachine: (auto-605794) DBG | I0731 21:50:15.176065 1155189 retry.go:31] will retry after 3.870083309s: waiting for machine to come up
	I0731 21:50:23.548763 1155232 start.go:364] duration metric: took 24.145380175s to acquireMachinesLock for "calico-605794"
	I0731 21:50:23.548844 1155232 start.go:93] Provisioning new machine with config: &{Name:calico-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:50:23.548971 1155232 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:50:19.048175 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:19.048737 1155156 main.go:141] libmachine: (auto-605794) Found IP for machine: 192.168.61.197
	I0731 21:50:19.048759 1155156 main.go:141] libmachine: (auto-605794) Reserving static IP address...
	I0731 21:50:19.048772 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has current primary IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:19.049126 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find host DHCP lease matching {name: "auto-605794", mac: "52:54:00:8f:e7:91", ip: "192.168.61.197"} in network mk-auto-605794
	I0731 21:50:19.132949 1155156 main.go:141] libmachine: (auto-605794) DBG | Getting to WaitForSSH function...
	I0731 21:50:19.132988 1155156 main.go:141] libmachine: (auto-605794) Reserved static IP address: 192.168.61.197
	I0731 21:50:19.133004 1155156 main.go:141] libmachine: (auto-605794) Waiting for SSH to be available...
	I0731 21:50:19.135770 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:19.136257 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794
	I0731 21:50:19.136285 1155156 main.go:141] libmachine: (auto-605794) DBG | unable to find defined IP address of network mk-auto-605794 interface with MAC address 52:54:00:8f:e7:91
	I0731 21:50:19.136479 1155156 main.go:141] libmachine: (auto-605794) DBG | Using SSH client type: external
	I0731 21:50:19.136509 1155156 main.go:141] libmachine: (auto-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa (-rw-------)
	I0731 21:50:19.136550 1155156 main.go:141] libmachine: (auto-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:50:19.136568 1155156 main.go:141] libmachine: (auto-605794) DBG | About to run SSH command:
	I0731 21:50:19.136582 1155156 main.go:141] libmachine: (auto-605794) DBG | exit 0
	I0731 21:50:19.141740 1155156 main.go:141] libmachine: (auto-605794) DBG | SSH cmd err, output: exit status 255: 
	I0731 21:50:19.141771 1155156 main.go:141] libmachine: (auto-605794) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 21:50:19.141783 1155156 main.go:141] libmachine: (auto-605794) DBG | command : exit 0
	I0731 21:50:19.141789 1155156 main.go:141] libmachine: (auto-605794) DBG | err     : exit status 255
	I0731 21:50:19.141828 1155156 main.go:141] libmachine: (auto-605794) DBG | output  : 
	I0731 21:50:22.142036 1155156 main.go:141] libmachine: (auto-605794) DBG | Getting to WaitForSSH function...
	I0731 21:50:22.144790 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.145218 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.145247 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.145353 1155156 main.go:141] libmachine: (auto-605794) DBG | Using SSH client type: external
	I0731 21:50:22.145381 1155156 main.go:141] libmachine: (auto-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa (-rw-------)
	I0731 21:50:22.145444 1155156 main.go:141] libmachine: (auto-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:50:22.145465 1155156 main.go:141] libmachine: (auto-605794) DBG | About to run SSH command:
	I0731 21:50:22.145480 1155156 main.go:141] libmachine: (auto-605794) DBG | exit 0
	I0731 21:50:22.272230 1155156 main.go:141] libmachine: (auto-605794) DBG | SSH cmd err, output: <nil>: 
	I0731 21:50:22.272488 1155156 main.go:141] libmachine: (auto-605794) KVM machine creation complete!
	I0731 21:50:22.272865 1155156 main.go:141] libmachine: (auto-605794) Calling .GetConfigRaw
	I0731 21:50:22.273569 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:22.273766 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:22.274007 1155156 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:50:22.274024 1155156 main.go:141] libmachine: (auto-605794) Calling .GetState
	I0731 21:50:22.275453 1155156 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:50:22.275471 1155156 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:50:22.275482 1155156 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:50:22.275488 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.277700 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.278049 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.278077 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.278226 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:22.278400 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.278555 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.278692 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:22.278825 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:22.279085 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:22.279099 1155156 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:50:22.383512 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:50:22.383541 1155156 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:50:22.383549 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.386237 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.386538 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.386569 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.386746 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:22.386944 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.387129 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.387295 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:22.387464 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:22.387656 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:22.387670 1155156 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:50:22.492484 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:50:22.492619 1155156 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:50:22.492631 1155156 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:50:22.492644 1155156 main.go:141] libmachine: (auto-605794) Calling .GetMachineName
	I0731 21:50:22.492963 1155156 buildroot.go:166] provisioning hostname "auto-605794"
	I0731 21:50:22.492991 1155156 main.go:141] libmachine: (auto-605794) Calling .GetMachineName
	I0731 21:50:22.493157 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.495786 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.496114 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.496140 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.496299 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:22.496539 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.496736 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.496916 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:22.497081 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:22.497283 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:22.497301 1155156 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-605794 && echo "auto-605794" | sudo tee /etc/hostname
	I0731 21:50:22.613925 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-605794
	
	I0731 21:50:22.613960 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.616677 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.617048 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.617080 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.617242 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:22.617450 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.617700 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.617861 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:22.618026 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:22.618220 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:22.618236 1155156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-605794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-605794/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-605794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:50:22.732757 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:50:22.732791 1155156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:50:22.732854 1155156 buildroot.go:174] setting up certificates
	I0731 21:50:22.732873 1155156 provision.go:84] configureAuth start
	I0731 21:50:22.732887 1155156 main.go:141] libmachine: (auto-605794) Calling .GetMachineName
	I0731 21:50:22.733266 1155156 main.go:141] libmachine: (auto-605794) Calling .GetIP
	I0731 21:50:22.735815 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.736131 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.736157 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.736331 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.738515 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.738812 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.738831 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.738961 1155156 provision.go:143] copyHostCerts
	I0731 21:50:22.739026 1155156 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:50:22.739040 1155156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:50:22.739139 1155156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:50:22.739244 1155156 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:50:22.739253 1155156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:50:22.739280 1155156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:50:22.739331 1155156 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:50:22.739338 1155156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:50:22.739359 1155156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:50:22.739405 1155156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.auto-605794 san=[127.0.0.1 192.168.61.197 auto-605794 localhost minikube]
	I0731 21:50:22.875217 1155156 provision.go:177] copyRemoteCerts
	I0731 21:50:22.875285 1155156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:50:22.875313 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:22.877980 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.878309 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:22.878344 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:22.878511 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:22.878747 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:22.878900 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:22.879017 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:22.967289 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:50:22.990861 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0731 21:50:23.014670 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:50:23.038624 1155156 provision.go:87] duration metric: took 305.735325ms to configureAuth
	I0731 21:50:23.038657 1155156 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:50:23.038833 1155156 config.go:182] Loaded profile config "auto-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:50:23.038953 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:23.042021 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.042393 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.042424 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.042636 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:23.042820 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.043073 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.043231 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:23.043376 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:23.043568 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:23.043583 1155156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:50:23.306652 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:50:23.306679 1155156 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:50:23.306687 1155156 main.go:141] libmachine: (auto-605794) Calling .GetURL
	I0731 21:50:23.307977 1155156 main.go:141] libmachine: (auto-605794) DBG | Using libvirt version 6000000
	I0731 21:50:23.310459 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.310892 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.310921 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.311127 1155156 main.go:141] libmachine: Docker is up and running!
	I0731 21:50:23.311153 1155156 main.go:141] libmachine: Reticulating splines...
	I0731 21:50:23.311169 1155156 client.go:171] duration metric: took 24.23587779s to LocalClient.Create
	I0731 21:50:23.311202 1155156 start.go:167] duration metric: took 24.235970079s to libmachine.API.Create "auto-605794"
	I0731 21:50:23.311210 1155156 start.go:293] postStartSetup for "auto-605794" (driver="kvm2")
	I0731 21:50:23.311223 1155156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:50:23.311259 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:23.311569 1155156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:50:23.311602 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:23.313999 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.314328 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.314359 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.314517 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:23.314721 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.314908 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:23.315048 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:23.398727 1155156 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:50:23.403014 1155156 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:50:23.403045 1155156 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:50:23.403124 1155156 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:50:23.403225 1155156 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:50:23.403348 1155156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:50:23.412946 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:50:23.436532 1155156 start.go:296] duration metric: took 125.297261ms for postStartSetup
	I0731 21:50:23.436595 1155156 main.go:141] libmachine: (auto-605794) Calling .GetConfigRaw
	I0731 21:50:23.437203 1155156 main.go:141] libmachine: (auto-605794) Calling .GetIP
	I0731 21:50:23.439966 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.440400 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.440429 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.440773 1155156 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/config.json ...
	I0731 21:50:23.440984 1155156 start.go:128] duration metric: took 24.386043597s to createHost
	I0731 21:50:23.441016 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:23.443346 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.443712 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.443740 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.443914 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:23.444147 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.444311 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.444487 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:23.444715 1155156 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:23.444887 1155156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0731 21:50:23.444898 1155156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:50:23.548599 1155156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462623.525551098
	
	I0731 21:50:23.548626 1155156 fix.go:216] guest clock: 1722462623.525551098
	I0731 21:50:23.548634 1155156 fix.go:229] Guest: 2024-07-31 21:50:23.525551098 +0000 UTC Remote: 2024-07-31 21:50:23.440998242 +0000 UTC m=+24.905191725 (delta=84.552856ms)
	I0731 21:50:23.548655 1155156 fix.go:200] guest clock delta is within tolerance: 84.552856ms
	I0731 21:50:23.548660 1155156 start.go:83] releasing machines lock for "auto-605794", held for 24.4938385s
	I0731 21:50:23.548686 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:23.548982 1155156 main.go:141] libmachine: (auto-605794) Calling .GetIP
	I0731 21:50:23.551837 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.552231 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.552259 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.552457 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:23.553045 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:23.553260 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:23.553369 1155156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:50:23.553442 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:23.553540 1155156 ssh_runner.go:195] Run: cat /version.json
	I0731 21:50:23.553604 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:23.556494 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.556789 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.556831 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.556851 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.556996 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:23.557242 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.557271 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:23.557292 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:23.557425 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:23.557505 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:23.557577 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:23.557698 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:23.557836 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:23.558002 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:23.551198 1155232 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 21:50:23.551411 1155232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:23.551450 1155232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:23.569436 1155232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42621
	I0731 21:50:23.570073 1155232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:23.570710 1155232 main.go:141] libmachine: Using API Version  1
	I0731 21:50:23.570732 1155232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:23.571140 1155232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:23.571343 1155232 main.go:141] libmachine: (calico-605794) Calling .GetMachineName
	I0731 21:50:23.571524 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:23.571694 1155232 start.go:159] libmachine.API.Create for "calico-605794" (driver="kvm2")
	I0731 21:50:23.571732 1155232 client.go:168] LocalClient.Create starting
	I0731 21:50:23.571778 1155232 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem
	I0731 21:50:23.571822 1155232 main.go:141] libmachine: Decoding PEM data...
	I0731 21:50:23.571844 1155232 main.go:141] libmachine: Parsing certificate...
	I0731 21:50:23.571915 1155232 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem
	I0731 21:50:23.571943 1155232 main.go:141] libmachine: Decoding PEM data...
	I0731 21:50:23.571959 1155232 main.go:141] libmachine: Parsing certificate...
	I0731 21:50:23.571984 1155232 main.go:141] libmachine: Running pre-create checks...
	I0731 21:50:23.572000 1155232 main.go:141] libmachine: (calico-605794) Calling .PreCreateCheck
	I0731 21:50:23.572434 1155232 main.go:141] libmachine: (calico-605794) Calling .GetConfigRaw
	I0731 21:50:23.572930 1155232 main.go:141] libmachine: Creating machine...
	I0731 21:50:23.572949 1155232 main.go:141] libmachine: (calico-605794) Calling .Create
	I0731 21:50:23.573131 1155232 main.go:141] libmachine: (calico-605794) Creating KVM machine...
	I0731 21:50:23.574535 1155232 main.go:141] libmachine: (calico-605794) DBG | found existing default KVM network
	I0731 21:50:23.576219 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.575998 1155453 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b1:2b:9e} reservation:<nil>}
	I0731 21:50:23.577332 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.577237 1155453 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6e:b2:f0} reservation:<nil>}
	I0731 21:50:23.578428 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.578343 1155453 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:77:84} reservation:<nil>}
	I0731 21:50:23.579630 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.579518 1155453 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030d5c0}
	I0731 21:50:23.579660 1155232 main.go:141] libmachine: (calico-605794) DBG | created network xml: 
	I0731 21:50:23.579674 1155232 main.go:141] libmachine: (calico-605794) DBG | <network>
	I0731 21:50:23.579686 1155232 main.go:141] libmachine: (calico-605794) DBG |   <name>mk-calico-605794</name>
	I0731 21:50:23.579695 1155232 main.go:141] libmachine: (calico-605794) DBG |   <dns enable='no'/>
	I0731 21:50:23.579704 1155232 main.go:141] libmachine: (calico-605794) DBG |   
	I0731 21:50:23.579714 1155232 main.go:141] libmachine: (calico-605794) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0731 21:50:23.579724 1155232 main.go:141] libmachine: (calico-605794) DBG |     <dhcp>
	I0731 21:50:23.579736 1155232 main.go:141] libmachine: (calico-605794) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0731 21:50:23.579745 1155232 main.go:141] libmachine: (calico-605794) DBG |     </dhcp>
	I0731 21:50:23.579784 1155232 main.go:141] libmachine: (calico-605794) DBG |   </ip>
	I0731 21:50:23.579809 1155232 main.go:141] libmachine: (calico-605794) DBG |   
	I0731 21:50:23.579818 1155232 main.go:141] libmachine: (calico-605794) DBG | </network>
	I0731 21:50:23.579822 1155232 main.go:141] libmachine: (calico-605794) DBG | 
	I0731 21:50:23.585107 1155232 main.go:141] libmachine: (calico-605794) DBG | trying to create private KVM network mk-calico-605794 192.168.72.0/24...
	I0731 21:50:23.658913 1155232 main.go:141] libmachine: (calico-605794) DBG | private KVM network mk-calico-605794 192.168.72.0/24 created
	I0731 21:50:23.658960 1155232 main.go:141] libmachine: (calico-605794) Setting up store path in /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794 ...
	I0731 21:50:23.658977 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.658842 1155453 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:50:23.659004 1155232 main.go:141] libmachine: (calico-605794) Building disk image from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:50:23.659024 1155232 main.go:141] libmachine: (calico-605794) Downloading /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:50:23.956730 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:23.956578 1155453 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa...
	I0731 21:50:24.094999 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:24.094854 1155453 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/calico-605794.rawdisk...
	I0731 21:50:24.095045 1155232 main.go:141] libmachine: (calico-605794) DBG | Writing magic tar header
	I0731 21:50:24.095059 1155232 main.go:141] libmachine: (calico-605794) DBG | Writing SSH key tar header
	I0731 21:50:24.095071 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:24.095015 1155453 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794 ...
	I0731 21:50:24.095162 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794
	I0731 21:50:24.095183 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794 (perms=drwx------)
	I0731 21:50:24.095194 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines
	I0731 21:50:24.095229 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:50:24.095258 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692/.minikube (perms=drwxr-xr-x)
	I0731 21:50:24.095267 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:50:24.095275 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19360-1093692
	I0731 21:50:24.095283 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:50:24.095290 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:50:24.095298 1155232 main.go:141] libmachine: (calico-605794) DBG | Checking permissions on dir: /home
	I0731 21:50:24.095309 1155232 main.go:141] libmachine: (calico-605794) DBG | Skipping /home - not owner
	I0731 21:50:24.095324 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins/minikube-integration/19360-1093692 (perms=drwxrwxr-x)
	I0731 21:50:24.095338 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:50:24.095355 1155232 main.go:141] libmachine: (calico-605794) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:50:24.095362 1155232 main.go:141] libmachine: (calico-605794) Creating domain...
	I0731 21:50:24.096533 1155232 main.go:141] libmachine: (calico-605794) define libvirt domain using xml: 
	I0731 21:50:24.096558 1155232 main.go:141] libmachine: (calico-605794) <domain type='kvm'>
	I0731 21:50:24.096568 1155232 main.go:141] libmachine: (calico-605794)   <name>calico-605794</name>
	I0731 21:50:24.096577 1155232 main.go:141] libmachine: (calico-605794)   <memory unit='MiB'>3072</memory>
	I0731 21:50:24.096585 1155232 main.go:141] libmachine: (calico-605794)   <vcpu>2</vcpu>
	I0731 21:50:24.096595 1155232 main.go:141] libmachine: (calico-605794)   <features>
	I0731 21:50:24.096604 1155232 main.go:141] libmachine: (calico-605794)     <acpi/>
	I0731 21:50:24.096613 1155232 main.go:141] libmachine: (calico-605794)     <apic/>
	I0731 21:50:24.096621 1155232 main.go:141] libmachine: (calico-605794)     <pae/>
	I0731 21:50:24.096632 1155232 main.go:141] libmachine: (calico-605794)     
	I0731 21:50:24.096672 1155232 main.go:141] libmachine: (calico-605794)   </features>
	I0731 21:50:24.096708 1155232 main.go:141] libmachine: (calico-605794)   <cpu mode='host-passthrough'>
	I0731 21:50:24.096722 1155232 main.go:141] libmachine: (calico-605794)   
	I0731 21:50:24.096733 1155232 main.go:141] libmachine: (calico-605794)   </cpu>
	I0731 21:50:24.096758 1155232 main.go:141] libmachine: (calico-605794)   <os>
	I0731 21:50:24.096774 1155232 main.go:141] libmachine: (calico-605794)     <type>hvm</type>
	I0731 21:50:24.096783 1155232 main.go:141] libmachine: (calico-605794)     <boot dev='cdrom'/>
	I0731 21:50:24.096792 1155232 main.go:141] libmachine: (calico-605794)     <boot dev='hd'/>
	I0731 21:50:24.096802 1155232 main.go:141] libmachine: (calico-605794)     <bootmenu enable='no'/>
	I0731 21:50:24.096816 1155232 main.go:141] libmachine: (calico-605794)   </os>
	I0731 21:50:24.096828 1155232 main.go:141] libmachine: (calico-605794)   <devices>
	I0731 21:50:24.096838 1155232 main.go:141] libmachine: (calico-605794)     <disk type='file' device='cdrom'>
	I0731 21:50:24.096852 1155232 main.go:141] libmachine: (calico-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/boot2docker.iso'/>
	I0731 21:50:24.096862 1155232 main.go:141] libmachine: (calico-605794)       <target dev='hdc' bus='scsi'/>
	I0731 21:50:24.096870 1155232 main.go:141] libmachine: (calico-605794)       <readonly/>
	I0731 21:50:24.096882 1155232 main.go:141] libmachine: (calico-605794)     </disk>
	I0731 21:50:24.096900 1155232 main.go:141] libmachine: (calico-605794)     <disk type='file' device='disk'>
	I0731 21:50:24.096914 1155232 main.go:141] libmachine: (calico-605794)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:50:24.096929 1155232 main.go:141] libmachine: (calico-605794)       <source file='/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/calico-605794.rawdisk'/>
	I0731 21:50:24.096940 1155232 main.go:141] libmachine: (calico-605794)       <target dev='hda' bus='virtio'/>
	I0731 21:50:24.096950 1155232 main.go:141] libmachine: (calico-605794)     </disk>
	I0731 21:50:24.096966 1155232 main.go:141] libmachine: (calico-605794)     <interface type='network'>
	I0731 21:50:24.096985 1155232 main.go:141] libmachine: (calico-605794)       <source network='mk-calico-605794'/>
	I0731 21:50:24.096993 1155232 main.go:141] libmachine: (calico-605794)       <model type='virtio'/>
	I0731 21:50:24.097000 1155232 main.go:141] libmachine: (calico-605794)     </interface>
	I0731 21:50:24.097014 1155232 main.go:141] libmachine: (calico-605794)     <interface type='network'>
	I0731 21:50:24.097026 1155232 main.go:141] libmachine: (calico-605794)       <source network='default'/>
	I0731 21:50:24.097035 1155232 main.go:141] libmachine: (calico-605794)       <model type='virtio'/>
	I0731 21:50:24.097043 1155232 main.go:141] libmachine: (calico-605794)     </interface>
	I0731 21:50:24.097062 1155232 main.go:141] libmachine: (calico-605794)     <serial type='pty'>
	I0731 21:50:24.097077 1155232 main.go:141] libmachine: (calico-605794)       <target port='0'/>
	I0731 21:50:24.097086 1155232 main.go:141] libmachine: (calico-605794)     </serial>
	I0731 21:50:24.097097 1155232 main.go:141] libmachine: (calico-605794)     <console type='pty'>
	I0731 21:50:24.097106 1155232 main.go:141] libmachine: (calico-605794)       <target type='serial' port='0'/>
	I0731 21:50:24.097114 1155232 main.go:141] libmachine: (calico-605794)     </console>
	I0731 21:50:24.097121 1155232 main.go:141] libmachine: (calico-605794)     <rng model='virtio'>
	I0731 21:50:24.097129 1155232 main.go:141] libmachine: (calico-605794)       <backend model='random'>/dev/random</backend>
	I0731 21:50:24.097137 1155232 main.go:141] libmachine: (calico-605794)     </rng>
	I0731 21:50:24.097151 1155232 main.go:141] libmachine: (calico-605794)     
	I0731 21:50:24.097160 1155232 main.go:141] libmachine: (calico-605794)     
	I0731 21:50:24.097171 1155232 main.go:141] libmachine: (calico-605794)   </devices>
	I0731 21:50:24.097182 1155232 main.go:141] libmachine: (calico-605794) </domain>
	I0731 21:50:24.097191 1155232 main.go:141] libmachine: (calico-605794) 
	I0731 21:50:24.104730 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:c4:61:20 in network default
	I0731 21:50:24.105519 1155232 main.go:141] libmachine: (calico-605794) Ensuring networks are active...
	I0731 21:50:24.105547 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:24.106261 1155232 main.go:141] libmachine: (calico-605794) Ensuring network default is active
	I0731 21:50:24.106566 1155232 main.go:141] libmachine: (calico-605794) Ensuring network mk-calico-605794 is active
	I0731 21:50:24.107023 1155232 main.go:141] libmachine: (calico-605794) Getting domain xml...
	I0731 21:50:24.107751 1155232 main.go:141] libmachine: (calico-605794) Creating domain...
	I0731 21:50:23.661522 1155156 ssh_runner.go:195] Run: systemctl --version
	I0731 21:50:23.668474 1155156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:50:23.834582 1155156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:50:23.842253 1155156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:50:23.842322 1155156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:50:23.865219 1155156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:50:23.865249 1155156 start.go:495] detecting cgroup driver to use...
	I0731 21:50:23.865335 1155156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:50:23.881311 1155156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:50:23.895220 1155156 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:50:23.895284 1155156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:50:23.909113 1155156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:50:23.923592 1155156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:50:24.053319 1155156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:50:24.226518 1155156 docker.go:233] disabling docker service ...
	I0731 21:50:24.226606 1155156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:50:24.241114 1155156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:50:24.254099 1155156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:50:24.387356 1155156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:50:24.530789 1155156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:50:24.544578 1155156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:50:24.562961 1155156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:50:24.563024 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.574471 1155156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:50:24.574568 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.585736 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.597408 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.609425 1155156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:50:24.620215 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.630690 1155156 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.648046 1155156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:24.658734 1155156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:50:24.669731 1155156 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:50:24.669806 1155156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:50:24.684394 1155156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:50:24.695576 1155156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:50:24.829500 1155156 ssh_runner.go:195] Run: sudo systemctl restart crio
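
Taken together, the block above is the container-runtime preparation phase: containerd and Docker are stopped and masked, crictl is pointed at CRI-O's socket, the pause image and cgroup driver are set in the 02-crio.conf drop-in, and CRI-O is restarted. A condensed bash sketch of the same edits (paths and values copied from the log; the default_sysctls seeding is abbreviated):

#!/usr/bin/env bash
set -euo pipefail
CONF=/etc/crio/crio.conf.d/02-crio.conf

# crictl should talk to CRI-O's socket.
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml >/dev/null

# Pause image and cgroup driver, as in the sed commands above.
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
# (the log also seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0)

sudo systemctl daemon-reload
sudo systemctl restart crio
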
	I0731 21:50:24.973231 1155156 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:50:24.973310 1155156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:50:24.978149 1155156 start.go:563] Will wait 60s for crictl version
	I0731 21:50:24.978220 1155156 ssh_runner.go:195] Run: which crictl
	I0731 21:50:24.981963 1155156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:50:25.019224 1155156 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:50:25.019318 1155156 ssh_runner.go:195] Run: crio --version
	I0731 21:50:25.047372 1155156 ssh_runner.go:195] Run: crio --version
	I0731 21:50:25.083680 1155156 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:50:25.084967 1155156 main.go:141] libmachine: (auto-605794) Calling .GetIP
	I0731 21:50:25.089600 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:25.090249 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:25.090274 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:25.090594 1155156 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:50:25.095039 1155156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
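
The one-liner above is the upsert idiom minikube uses for /etc/hosts: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back with sudo (a plain shell redirect would not run as root). The same pattern shows up again further down for control-plane.minikube.internal. A generic sketch of the idiom:

#!/usr/bin/env bash
# Upsert "IP<TAB>NAME" in /etc/hosts without leaving stale duplicates behind.
upsert_host() {
  local ip=$1 name=$2
  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
  sudo cp "/tmp/hosts.$$" /etc/hosts
  rm -f "/tmp/hosts.$$"
}

upsert_host 192.168.61.1 host.minikube.internal
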
	I0731 21:50:25.110347 1155156 kubeadm.go:883] updating cluster {Name:auto-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:auto-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:50:25.110472 1155156 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:50:25.110513 1155156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:50:25.156323 1155156 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:50:25.156397 1155156 ssh_runner.go:195] Run: which lz4
	I0731 21:50:25.160979 1155156 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:50:25.166830 1155156 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:50:25.166879 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:50:26.574278 1155156 crio.go:462] duration metric: took 1.413343492s to copy over tarball
	I0731 21:50:26.574372 1155156 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
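
No Kubernetes images were found in CRI-O's store, so the cached preload tarball (~406 MB of v1.30.3 images) is scp'd to the guest and unpacked straight into /var, keeping xattrs so file capabilities survive; the log then removes the archive and re-checks the image list. On the guest this amounts to:

# Unpack the preloaded image tarball into CRI-O's storage under /var,
# preserving security.capability xattrs, then clean up and verify.
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
sudo rm -f /preloaded.tar.lz4
sudo crictl images --output json | head
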
	I0731 21:50:25.474403 1155232 main.go:141] libmachine: (calico-605794) Waiting to get IP...
	I0731 21:50:25.475590 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:25.476185 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:25.476212 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:25.476166 1155453 retry.go:31] will retry after 234.6276ms: waiting for machine to come up
	I0731 21:50:25.713065 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:25.713734 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:25.713769 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:25.713691 1155453 retry.go:31] will retry after 251.381971ms: waiting for machine to come up
	I0731 21:50:25.967408 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:25.967988 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:25.968019 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:25.967949 1155453 retry.go:31] will retry after 359.64133ms: waiting for machine to come up
	I0731 21:50:26.329440 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:26.330045 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:26.330083 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:26.329955 1155453 retry.go:31] will retry after 401.495899ms: waiting for machine to come up
	I0731 21:50:26.733580 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:26.734127 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:26.734167 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:26.734072 1155453 retry.go:31] will retry after 747.767701ms: waiting for machine to come up
	I0731 21:50:27.483048 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:27.483512 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:27.483554 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:27.483469 1155453 retry.go:31] will retry after 721.251152ms: waiting for machine to come up
	I0731 21:50:28.206163 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:28.206614 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:28.206644 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:28.206571 1155453 retry.go:31] will retry after 794.324397ms: waiting for machine to come up
	I0731 21:50:29.003273 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:29.003885 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:29.003916 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:29.003847 1155453 retry.go:31] will retry after 1.364031787s: waiting for machine to come up
	I0731 21:50:29.144176 1155156 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.569764691s)
	I0731 21:50:29.144216 1155156 crio.go:469] duration metric: took 2.569898318s to extract the tarball
	I0731 21:50:29.144229 1155156 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:50:29.195784 1155156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:50:29.242780 1155156 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:50:29.242809 1155156 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:50:29.242819 1155156 kubeadm.go:934] updating node { 192.168.61.197 8443 v1.30.3 crio true true} ...
	I0731 21:50:29.242942 1155156 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
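
The kubelet unit fragment above (Wants=crio.service plus the overridden ExecStart) is materialized a few lines below as small files under systemd: a ~311-byte drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and a ~352-byte /lib/systemd/system/kubelet.service. A hand-written sketch of the drop-in, using the flags shown above (the exact split between the two files may differ):

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
# The empty ExecStart= clears the packaged command before overriding it.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.197
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet
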
	I0731 21:50:29.243033 1155156 ssh_runner.go:195] Run: crio config
	I0731 21:50:29.290892 1155156 cni.go:84] Creating CNI manager for ""
	I0731 21:50:29.290924 1155156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:50:29.290937 1155156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:50:29.290968 1155156 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.197 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-605794 NodeName:auto-605794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:50:29.291149 1155156 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-605794"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
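
The generated file above (2155 bytes, written to /var/tmp/minikube/kubeadm.yaml.new just below) stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document; it is the --config input for the kubeadm init call that follows. To inspect what the config would do without touching the node, a dry run works (hypothetical invocation, not part of the log):

# Render kubeadm's plan for this config without applying anything.
sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml --dry-run
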
	I0731 21:50:29.291226 1155156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:50:29.301804 1155156 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:50:29.301895 1155156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:50:29.313030 1155156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0731 21:50:29.329702 1155156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:50:29.346338 1155156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0731 21:50:29.364614 1155156 ssh_runner.go:195] Run: grep 192.168.61.197	control-plane.minikube.internal$ /etc/hosts
	I0731 21:50:29.368637 1155156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.197	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:50:29.380715 1155156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:50:29.504850 1155156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:50:29.523827 1155156 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794 for IP: 192.168.61.197
	I0731 21:50:29.523858 1155156 certs.go:194] generating shared ca certs ...
	I0731 21:50:29.523880 1155156 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:29.524054 1155156 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:50:29.524123 1155156 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:50:29.524158 1155156 certs.go:256] generating profile certs ...
	I0731 21:50:29.524242 1155156 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.key
	I0731 21:50:29.524258 1155156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.crt with IP's: []
	I0731 21:50:29.806065 1155156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.crt ...
	I0731 21:50:29.806097 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.crt: {Name:mk1a4498f3bf68fdb38315d6219c26aa357772c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:29.806307 1155156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.key ...
	I0731 21:50:29.806327 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/client.key: {Name:mkfa07df8c72eee697bc651dfd339d5ddc34b273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:29.806466 1155156 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key.27a59ef3
	I0731 21:50:29.806490 1155156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt.27a59ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.197]
	I0731 21:50:30.079121 1155156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt.27a59ef3 ...
	I0731 21:50:30.079156 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt.27a59ef3: {Name:mk3d52f97f32c8932e22e59758b5038d2e937169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:30.079351 1155156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key.27a59ef3 ...
	I0731 21:50:30.079367 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key.27a59ef3: {Name:mke2338abd6de47d4189bf11ca7ba1b207d067c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:30.079482 1155156 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt.27a59ef3 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt
	I0731 21:50:30.079561 1155156 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key.27a59ef3 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key
	I0731 21:50:30.079617 1155156 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.key
	I0731 21:50:30.079628 1155156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.crt with IP's: []
	I0731 21:50:30.238700 1155156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.crt ...
	I0731 21:50:30.238745 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.crt: {Name:mkf58ee197d6dab43630932e751a03c42e5396fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:30.238972 1155156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.key ...
	I0731 21:50:30.238993 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.key: {Name:mk46e76b16c576ad3aac34e0657fd99844f6a133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:30.239198 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:50:30.239238 1155156 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:50:30.239248 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:50:30.239271 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:50:30.239301 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:50:30.239332 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:50:30.239388 1155156 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:50:30.240078 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:50:30.271058 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:50:30.307497 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:50:30.343490 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:50:30.370997 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0731 21:50:30.395206 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:50:30.421587 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:50:30.445920 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/auto-605794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:50:30.472385 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:50:30.496126 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:50:30.520842 1155156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:50:30.545481 1155156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
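
By this point the shared minikube CA, the profile's apiserver and proxy-client key pairs, the extra user certs, and the kubeconfig have all been copied under /var/lib/minikube and /usr/share/ca-certificates on the guest. A quick manual check that the copied apiserver cert chains to the CA and includes the node IP in its SANs (standard openssl usage, not taken from the log):

# Verify the apiserver certificate against the minikube CA and inspect its SANs.
sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
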
	I0731 21:50:30.565325 1155156 ssh_runner.go:195] Run: openssl version
	I0731 21:50:30.571088 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:50:30.582472 1155156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:30.587090 1155156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:30.587171 1155156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:30.592928 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:50:30.603878 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:50:30.615293 1155156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:50:30.619635 1155156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:50:30.619715 1155156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:50:30.625690 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:50:30.636489 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:50:30.647239 1155156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:50:30.651816 1155156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:50:30.651890 1155156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:50:30.657571 1155156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
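
The test -L / ln -fs steps above implement OpenSSL's hashed trust-directory convention: every trusted PEM in /etc/ssl/certs gets a symlink named <subject-hash>.0, where the hash is exactly what `openssl x509 -hash -noout` prints, so libraries that scan the directory can find an issuer by hash. Doing the same by hand for one certificate:

# Install a CA certificate under /etc/ssl/certs with its subject-hash name.
pem=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$pem")
sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
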
	I0731 21:50:30.668499 1155156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:50:30.673036 1155156 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:50:30.673103 1155156 kubeadm.go:392] StartCluster: {Name:auto-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:auto-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:50:30.673210 1155156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:50:30.673270 1155156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:50:30.711297 1155156 cri.go:89] found id: ""
	I0731 21:50:30.711394 1155156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:50:30.722043 1155156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:50:30.731870 1155156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:50:30.742310 1155156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:50:30.742341 1155156 kubeadm.go:157] found existing configuration files:
	
	I0731 21:50:30.742403 1155156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:50:30.751401 1155156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:50:30.751467 1155156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:50:30.761569 1155156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:50:30.771232 1155156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:50:30.771295 1155156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:50:30.781941 1155156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:50:30.791577 1155156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:50:30.791672 1155156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:50:30.801461 1155156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:50:30.810412 1155156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:50:30.810507 1155156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:50:30.820319 1155156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:50:31.021296 1155156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:50:30.369467 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:30.369971 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:30.370015 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:30.369936 1155453 retry.go:31] will retry after 1.48893462s: waiting for machine to come up
	I0731 21:50:31.860295 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:31.860791 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:31.860819 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:31.860747 1155453 retry.go:31] will retry after 1.993150774s: waiting for machine to come up
	I0731 21:50:33.855105 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:33.855660 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:33.855694 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:33.855621 1155453 retry.go:31] will retry after 2.648468027s: waiting for machine to come up
	I0731 21:50:36.506068 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:36.506542 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:36.506578 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:36.506476 1155453 retry.go:31] will retry after 3.407550661s: waiting for machine to come up
	I0731 21:50:40.471175 1155156 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:50:40.471247 1155156 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:50:40.471393 1155156 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:50:40.471557 1155156 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:50:40.471654 1155156 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:50:40.471729 1155156 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:50:40.473378 1155156 out.go:204]   - Generating certificates and keys ...
	I0731 21:50:40.473463 1155156 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:50:40.473537 1155156 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:50:40.473639 1155156 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:50:40.473705 1155156 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:50:40.473783 1155156 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:50:40.473853 1155156 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:50:40.473931 1155156 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:50:40.474086 1155156 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-605794 localhost] and IPs [192.168.61.197 127.0.0.1 ::1]
	I0731 21:50:40.474156 1155156 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:50:40.474336 1155156 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-605794 localhost] and IPs [192.168.61.197 127.0.0.1 ::1]
	I0731 21:50:40.474417 1155156 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:50:40.474470 1155156 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:50:40.474513 1155156 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:50:40.474561 1155156 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:50:40.474609 1155156 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:50:40.474664 1155156 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:50:40.474726 1155156 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:50:40.474778 1155156 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:50:40.474836 1155156 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:50:40.474907 1155156 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:50:40.474962 1155156 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:50:40.476562 1155156 out.go:204]   - Booting up control plane ...
	I0731 21:50:40.476643 1155156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:50:40.476708 1155156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:50:40.476766 1155156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:50:40.476857 1155156 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:50:40.476943 1155156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:50:40.476988 1155156 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:50:40.477103 1155156 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:50:40.477164 1155156 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:50:40.477218 1155156 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001638829s
	I0731 21:50:40.477276 1155156 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:50:40.477323 1155156 kubeadm.go:310] [api-check] The API server is healthy after 4.50210749s
	I0731 21:50:40.477431 1155156 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:50:40.477556 1155156 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:50:40.477620 1155156 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:50:40.477778 1155156 kubeadm.go:310] [mark-control-plane] Marking the node auto-605794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:50:40.477832 1155156 kubeadm.go:310] [bootstrap-token] Using token: 4xwep0.tg9ezuqyy6nq36pu
	I0731 21:50:40.479123 1155156 out.go:204]   - Configuring RBAC rules ...
	I0731 21:50:40.479205 1155156 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:50:40.479276 1155156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:50:40.479411 1155156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:50:40.479529 1155156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:50:40.479638 1155156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:50:40.479717 1155156 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:50:40.479819 1155156 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:50:40.479865 1155156 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:50:40.479906 1155156 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:50:40.479912 1155156 kubeadm.go:310] 
	I0731 21:50:40.479969 1155156 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:50:40.479976 1155156 kubeadm.go:310] 
	I0731 21:50:40.480037 1155156 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:50:40.480043 1155156 kubeadm.go:310] 
	I0731 21:50:40.480103 1155156 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:50:40.480184 1155156 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:50:40.480249 1155156 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:50:40.480257 1155156 kubeadm.go:310] 
	I0731 21:50:40.480306 1155156 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:50:40.480311 1155156 kubeadm.go:310] 
	I0731 21:50:40.480350 1155156 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:50:40.480356 1155156 kubeadm.go:310] 
	I0731 21:50:40.480400 1155156 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:50:40.480465 1155156 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:50:40.480522 1155156 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:50:40.480529 1155156 kubeadm.go:310] 
	I0731 21:50:40.480614 1155156 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:50:40.480712 1155156 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:50:40.480726 1155156 kubeadm.go:310] 
	I0731 21:50:40.480815 1155156 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4xwep0.tg9ezuqyy6nq36pu \
	I0731 21:50:40.480940 1155156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:50:40.480990 1155156 kubeadm.go:310] 	--control-plane 
	I0731 21:50:40.481010 1155156 kubeadm.go:310] 
	I0731 21:50:40.481094 1155156 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:50:40.481101 1155156 kubeadm.go:310] 
	I0731 21:50:40.481198 1155156 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4xwep0.tg9ezuqyy6nq36pu \
	I0731 21:50:40.481360 1155156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:50:40.481380 1155156 cni.go:84] Creating CNI manager for ""
	I0731 21:50:40.481402 1155156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:50:40.483086 1155156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:50:40.484344 1155156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:50:40.495640 1155156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
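
With kubeadm init done, minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist so pods on the 10.244.0.0/16 pod CIDR get veths on a local Linux bridge. The file's contents are not printed in the log; a representative bridge+portmap conflist of roughly that shape (illustrative only, field values assumed) would look like:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
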
	I0731 21:50:40.516388 1155156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:50:40.516530 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:40.516570 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-605794 minikube.k8s.io/updated_at=2024_07_31T21_50_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=auto-605794 minikube.k8s.io/primary=true
	I0731 21:50:40.541302 1155156 ops.go:34] apiserver oom_adj: -16
	I0731 21:50:40.615661 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:41.116599 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:41.616232 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:42.116404 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:42.615878 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:43.115803 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:39.915946 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:39.916523 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:39.916555 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:39.916459 1155453 retry.go:31] will retry after 3.306268286s: waiting for machine to come up
	I0731 21:50:43.225823 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:43.226320 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find current IP address of domain calico-605794 in network mk-calico-605794
	I0731 21:50:43.226350 1155232 main.go:141] libmachine: (calico-605794) DBG | I0731 21:50:43.226285 1155453 retry.go:31] will retry after 5.496298506s: waiting for machine to come up
	I0731 21:50:43.616231 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:44.115835 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:44.615940 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:45.116669 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:45.616740 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:46.116617 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:46.615754 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:47.115702 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:47.616415 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:48.116600 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:48.724708 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.725248 1155232 main.go:141] libmachine: (calico-605794) Found IP for machine: 192.168.72.131
	I0731 21:50:48.725278 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has current primary IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.725287 1155232 main.go:141] libmachine: (calico-605794) Reserving static IP address...
	I0731 21:50:48.725685 1155232 main.go:141] libmachine: (calico-605794) DBG | unable to find host DHCP lease matching {name: "calico-605794", mac: "52:54:00:d5:90:16", ip: "192.168.72.131"} in network mk-calico-605794
	I0731 21:50:48.812435 1155232 main.go:141] libmachine: (calico-605794) DBG | Getting to WaitForSSH function...
	I0731 21:50:48.812490 1155232 main.go:141] libmachine: (calico-605794) Reserved static IP address: 192.168.72.131
	I0731 21:50:48.812500 1155232 main.go:141] libmachine: (calico-605794) Waiting for SSH to be available...
	I0731 21:50:48.815541 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.816025 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:48.816047 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.816228 1155232 main.go:141] libmachine: (calico-605794) DBG | Using SSH client type: external
	I0731 21:50:48.816255 1155232 main.go:141] libmachine: (calico-605794) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa (-rw-------)
	I0731 21:50:48.816289 1155232 main.go:141] libmachine: (calico-605794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:50:48.816301 1155232 main.go:141] libmachine: (calico-605794) DBG | About to run SSH command:
	I0731 21:50:48.816312 1155232 main.go:141] libmachine: (calico-605794) DBG | exit 0
	I0731 21:50:48.948359 1155232 main.go:141] libmachine: (calico-605794) DBG | SSH cmd err, output: <nil>: 
	I0731 21:50:48.948642 1155232 main.go:141] libmachine: (calico-605794) KVM machine creation complete!
	I0731 21:50:48.949018 1155232 main.go:141] libmachine: (calico-605794) Calling .GetConfigRaw
	I0731 21:50:48.949600 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:48.949827 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:48.950046 1155232 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:50:48.950060 1155232 main.go:141] libmachine: (calico-605794) Calling .GetState
	I0731 21:50:48.951951 1155232 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:50:48.951974 1155232 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:50:48.951981 1155232 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:50:48.951990 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:48.954552 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.954893 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:48.954918 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:48.955154 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:48.955364 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:48.955567 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:48.955716 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:48.955896 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:48.956130 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:48.956141 1155232 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:50:49.075513 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:50:49.075542 1155232 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:50:49.075555 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.078626 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.079069 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.079101 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.079280 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:49.079497 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.079706 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.079837 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:49.080012 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:49.080265 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:49.080283 1155232 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:50:49.193586 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:50:49.193730 1155232 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:50:49.193747 1155232 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:50:49.193758 1155232 main.go:141] libmachine: (calico-605794) Calling .GetMachineName
	I0731 21:50:49.194042 1155232 buildroot.go:166] provisioning hostname "calico-605794"
	I0731 21:50:49.194068 1155232 main.go:141] libmachine: (calico-605794) Calling .GetMachineName
	I0731 21:50:49.194277 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.197212 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.197637 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.197668 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.197744 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:49.197998 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.198187 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.198364 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:49.198551 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:49.198724 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:49.198736 1155232 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-605794 && echo "calico-605794" | sudo tee /etc/hostname
	I0731 21:50:49.323376 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-605794
	
	I0731 21:50:49.323404 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.326492 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.326931 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.326960 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.327121 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:49.327365 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.327508 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.327608 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:49.327755 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:49.327920 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:49.327941 1155232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-605794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-605794/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-605794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:50:49.444766 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:50:49.444806 1155232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:50:49.444875 1155232 buildroot.go:174] setting up certificates
	I0731 21:50:49.444894 1155232 provision.go:84] configureAuth start
	I0731 21:50:49.444910 1155232 main.go:141] libmachine: (calico-605794) Calling .GetMachineName
	I0731 21:50:49.445270 1155232 main.go:141] libmachine: (calico-605794) Calling .GetIP
	I0731 21:50:49.448275 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.448724 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.448747 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.449004 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.451497 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.451918 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.451951 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.452132 1155232 provision.go:143] copyHostCerts
	I0731 21:50:49.452199 1155232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:50:49.452214 1155232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:50:49.452288 1155232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:50:49.452430 1155232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:50:49.452440 1155232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:50:49.452470 1155232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:50:49.452577 1155232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:50:49.452588 1155232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:50:49.452614 1155232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:50:49.452717 1155232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.calico-605794 san=[127.0.0.1 192.168.72.131 calico-605794 localhost minikube]
	I0731 21:50:49.684641 1155232 provision.go:177] copyRemoteCerts
	I0731 21:50:49.684708 1155232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:50:49.684737 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.688016 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.688502 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.688541 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.688707 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:49.688930 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.689127 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:49.689309 1155232 sshutil.go:53] new ssh client: &{IP:192.168.72.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa Username:docker}
	I0731 21:50:49.774157 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:50:49.798305 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 21:50:49.823201 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:50:49.847310 1155232 provision.go:87] duration metric: took 402.3975ms to configureAuth
	I0731 21:50:49.847348 1155232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:50:49.847575 1155232 config.go:182] Loaded profile config "calico-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:50:49.847665 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:49.851597 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.852018 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:49.852052 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:49.852311 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:49.852564 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.852747 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:49.852922 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:49.853164 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:49.853332 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:49.853347 1155232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:50:50.118104 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
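The printf %!s(MISSING) in the command logged just above is Go's fmt placeholder for a missing argument, so the log does not show the string that was actually written. Judging from the captured SSH output, the provisioning step presumably amounts to the sketch below; the printf argument is inferred from the output rather than quoted from the log:
	# Hedged reconstruction: write CRI-O's minikube options and restart the runtime.
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio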
	I0731 21:50:50.118136 1155232 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:50:50.118147 1155232 main.go:141] libmachine: (calico-605794) Calling .GetURL
	I0731 21:50:50.119540 1155232 main.go:141] libmachine: (calico-605794) DBG | Using libvirt version 6000000
	I0731 21:50:50.122170 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.122576 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.122610 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.122835 1155232 main.go:141] libmachine: Docker is up and running!
	I0731 21:50:50.122851 1155232 main.go:141] libmachine: Reticulating splines...
	I0731 21:50:50.122859 1155232 client.go:171] duration metric: took 26.551116351s to LocalClient.Create
	I0731 21:50:50.122885 1155232 start.go:167] duration metric: took 26.551192587s to libmachine.API.Create "calico-605794"
	I0731 21:50:50.122900 1155232 start.go:293] postStartSetup for "calico-605794" (driver="kvm2")
	I0731 21:50:50.122913 1155232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:50:50.122937 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:50.123270 1155232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:50:50.123303 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:50.126254 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.126702 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.126740 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.127043 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:50.127303 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:50.127518 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:50.127721 1155232 sshutil.go:53] new ssh client: &{IP:192.168.72.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa Username:docker}
	I0731 21:50:50.224038 1155232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:50:50.228858 1155232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:50:50.228894 1155232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:50:50.228995 1155232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:50:50.229104 1155232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:50:50.229207 1155232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:50:50.239232 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:50:50.264430 1155232 start.go:296] duration metric: took 141.513289ms for postStartSetup
	I0731 21:50:50.264510 1155232 main.go:141] libmachine: (calico-605794) Calling .GetConfigRaw
	I0731 21:50:50.265210 1155232 main.go:141] libmachine: (calico-605794) Calling .GetIP
	I0731 21:50:50.268359 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.268716 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.268745 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.269059 1155232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/config.json ...
	I0731 21:50:50.269301 1155232 start.go:128] duration metric: took 26.720316904s to createHost
	I0731 21:50:50.269329 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:50.271773 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.272139 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.272168 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.272348 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:50.272547 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:50.272733 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:50.272920 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:50.273103 1155232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:50:50.273270 1155232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.131 22 <nil> <nil>}
	I0731 21:50:50.273279 1155232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:50:50.380729 1155232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462650.359183123
	
	I0731 21:50:50.380767 1155232 fix.go:216] guest clock: 1722462650.359183123
	I0731 21:50:50.380777 1155232 fix.go:229] Guest: 2024-07-31 21:50:50.359183123 +0000 UTC Remote: 2024-07-31 21:50:50.269316234 +0000 UTC m=+50.984780064 (delta=89.866889ms)
	I0731 21:50:50.380804 1155232 fix.go:200] guest clock delta is within tolerance: 89.866889ms
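The date +%!s(MISSING).%!N(MISSING) command above carries the same unexpanded format verbs; given the epoch-with-nanoseconds output (1722462650.359183123), the intended command is almost certainly date +%s.%N. A minimal sketch of the guest-clock check, assuming that reconstruction (the plain ssh call here stands in for minikube's own SSH runner):
	# Compare guest and host clocks; the run above measured a delta of ~90ms, within tolerance.
	guest=$(ssh docker@192.168.72.131 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.6fs\n", h - g }'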
	I0731 21:50:50.380812 1155232 start.go:83] releasing machines lock for "calico-605794", held for 26.83201228s
	I0731 21:50:50.380839 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:50.381148 1155232 main.go:141] libmachine: (calico-605794) Calling .GetIP
	I0731 21:50:50.384180 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.384565 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.384596 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.384825 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:50.385389 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:50.385564 1155232 main.go:141] libmachine: (calico-605794) Calling .DriverName
	I0731 21:50:50.385629 1155232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:50:50.385683 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:50.385798 1155232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:50:50.385823 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHHostname
	I0731 21:50:50.388300 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.388751 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.388776 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.388794 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.388943 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:50.389147 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:50.389228 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:50.389261 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:50.389338 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:50.389461 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHPort
	I0731 21:50:50.389543 1155232 sshutil.go:53] new ssh client: &{IP:192.168.72.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa Username:docker}
	I0731 21:50:50.389657 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHKeyPath
	I0731 21:50:50.389792 1155232 main.go:141] libmachine: (calico-605794) Calling .GetSSHUsername
	I0731 21:50:50.389939 1155232 sshutil.go:53] new ssh client: &{IP:192.168.72.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/calico-605794/id_rsa Username:docker}
	I0731 21:50:50.494548 1155232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:50:50.500761 1155232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:50:50.665727 1155232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:50:50.672101 1155232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:50:50.672202 1155232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:50:50.690723 1155232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
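The -printf "%!p(MISSING), " in the find command above is another unexpanded format verb; the flag passed is presumably -printf "%p, ". A hedged reconstruction of the step that sidelines bridge/podman CNI configs so only the CNI chosen for this cluster (calico) stays active, with the -exec quoting tightened slightly relative to the logged form:
	# Assumed reconstruction of the logged find/mv step.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;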
	I0731 21:50:50.690753 1155232 start.go:495] detecting cgroup driver to use...
	I0731 21:50:50.690820 1155232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:50:50.708468 1155232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:50:50.724168 1155232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:50:50.724244 1155232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:50:50.739493 1155232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:50:50.755278 1155232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:50:50.882687 1155232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:50:51.033831 1155232 docker.go:233] disabling docker service ...
	I0731 21:50:51.033917 1155232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:50:51.048192 1155232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:50:51.062850 1155232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:50:51.203165 1155232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:50:51.314887 1155232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:50:51.329464 1155232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:50:51.349867 1155232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:50:51.349935 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.361151 1155232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:50:51.361241 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.373303 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.385527 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.396885 1155232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:50:51.408547 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.419624 1155232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:50:51.438204 1155232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
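The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Collected into one script for readability (same expressions as logged, with the path factored into a variable; the /etc/cni/net.mk cleanup is omitted here):
	conf=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
	sudo grep -q '^ *default_sysctls' "$conf" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"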
	I0731 21:50:51.450427 1155232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:50:51.460732 1155232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:50:51.460798 1155232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:50:51.475880 1155232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
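The failed sysctl probe at 21:50:51.460 is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the very next step is a modprobe. The equivalent check-then-load sequence, as a sketch:
	# Make sure bridged traffic hits iptables and IPv4 forwarding is on.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'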
	I0731 21:50:51.486792 1155232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:50:51.606099 1155232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:50:51.743513 1155232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:50:51.743595 1155232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:50:51.748519 1155232 start.go:563] Will wait 60s for crictl version
	I0731 21:50:51.748601 1155232 ssh_runner.go:195] Run: which crictl
	I0731 21:50:51.752575 1155232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:50:51.791242 1155232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:50:51.791326 1155232 ssh_runner.go:195] Run: crio --version
	I0731 21:50:51.820477 1155232 ssh_runner.go:195] Run: crio --version
	I0731 21:50:51.850919 1155232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:50:48.615747 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:49.116221 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:49.615785 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:50.116472 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:50.616718 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:51.116532 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:51.616474 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:52.116064 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:52.615797 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:53.116314 1155156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:50:53.228144 1155156 kubeadm.go:1113] duration metric: took 12.711681555s to wait for elevateKubeSystemPrivileges
	I0731 21:50:53.228183 1155156 kubeadm.go:394] duration metric: took 22.555085176s to StartCluster
	I0731 21:50:53.228209 1155156 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:53.228298 1155156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:50:53.230497 1155156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:53.230762 1155156 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:50:53.230953 1155156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 21:50:53.230988 1155156 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:50:53.231069 1155156 addons.go:69] Setting storage-provisioner=true in profile "auto-605794"
	I0731 21:50:53.231111 1155156 addons.go:234] Setting addon storage-provisioner=true in "auto-605794"
	I0731 21:50:53.231146 1155156 host.go:66] Checking if "auto-605794" exists ...
	I0731 21:50:53.231196 1155156 config.go:182] Loaded profile config "auto-605794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:50:53.231252 1155156 addons.go:69] Setting default-storageclass=true in profile "auto-605794"
	I0731 21:50:53.231292 1155156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-605794"
	I0731 21:50:53.231600 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:53.231618 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:53.231630 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:53.231633 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:53.232349 1155156 out.go:177] * Verifying Kubernetes components...
	I0731 21:50:53.233816 1155156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:50:53.251949 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0731 21:50:53.252148 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0731 21:50:53.252882 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:53.253560 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:50:53.253577 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:53.254026 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:53.254268 1155156 main.go:141] libmachine: (auto-605794) Calling .GetState
	I0731 21:50:53.255933 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:53.257148 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:50:53.257171 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:53.259493 1155156 addons.go:234] Setting addon default-storageclass=true in "auto-605794"
	I0731 21:50:53.259541 1155156 host.go:66] Checking if "auto-605794" exists ...
	I0731 21:50:53.259822 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:53.259937 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:53.260400 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:53.260427 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:53.261064 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:53.282111 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0731 21:50:53.282493 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0731 21:50:53.282722 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:53.283129 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:53.283308 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:50:53.283334 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:53.283684 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:53.283831 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:50:53.283856 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:53.283921 1155156 main.go:141] libmachine: (auto-605794) Calling .GetState
	I0731 21:50:53.284300 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:53.284913 1155156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:50:53.284963 1155156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:50:53.285953 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:53.287760 1155156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:50:53.289404 1155156 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:50:53.289429 1155156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:50:53.289458 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:53.294284 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:53.294912 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:53.294946 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:53.295004 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:53.295214 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:53.295432 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:53.295578 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:53.315182 1155156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I0731 21:50:53.315690 1155156 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:50:53.316433 1155156 main.go:141] libmachine: Using API Version  1
	I0731 21:50:53.316455 1155156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:50:53.319955 1155156 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:50:53.320284 1155156 main.go:141] libmachine: (auto-605794) Calling .GetState
	I0731 21:50:53.322602 1155156 main.go:141] libmachine: (auto-605794) Calling .DriverName
	I0731 21:50:53.323052 1155156 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:50:53.323073 1155156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:50:53.323096 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHHostname
	I0731 21:50:53.326785 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:53.327470 1155156 main.go:141] libmachine: (auto-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:e7:91", ip: ""} in network mk-auto-605794: {Iface:virbr1 ExpiryTime:2024-07-31 22:50:13 +0000 UTC Type:0 Mac:52:54:00:8f:e7:91 Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:auto-605794 Clientid:01:52:54:00:8f:e7:91}
	I0731 21:50:53.327505 1155156 main.go:141] libmachine: (auto-605794) DBG | domain auto-605794 has defined IP address 192.168.61.197 and MAC address 52:54:00:8f:e7:91 in network mk-auto-605794
	I0731 21:50:53.327894 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHPort
	I0731 21:50:53.328178 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHKeyPath
	I0731 21:50:53.328377 1155156 main.go:141] libmachine: (auto-605794) Calling .GetSSHUsername
	I0731 21:50:53.328691 1155156 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/auto-605794/id_rsa Username:docker}
	I0731 21:50:53.468226 1155156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:50:53.468304 1155156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
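The long pipeline at 21:50:53.468 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side bridge IP. Its effect, read directly off the sed expression in the logged command, is a hosts block inserted ahead of the forward plugin in the Corefile, roughly:
	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}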
	I0731 21:50:51.852171 1155232 main.go:141] libmachine: (calico-605794) Calling .GetIP
	I0731 21:50:51.855141 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:51.855584 1155232 main.go:141] libmachine: (calico-605794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:90:16", ip: ""} in network mk-calico-605794: {Iface:virbr3 ExpiryTime:2024-07-31 22:50:38 +0000 UTC Type:0 Mac:52:54:00:d5:90:16 Iaid: IPaddr:192.168.72.131 Prefix:24 Hostname:calico-605794 Clientid:01:52:54:00:d5:90:16}
	I0731 21:50:51.855614 1155232 main.go:141] libmachine: (calico-605794) DBG | domain calico-605794 has defined IP address 192.168.72.131 and MAC address 52:54:00:d5:90:16 in network mk-calico-605794
	I0731 21:50:51.855854 1155232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:50:51.860263 1155232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:50:51.872554 1155232 kubeadm.go:883] updating cluster {Name:calico-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.131 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:50:51.872687 1155232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:50:51.872747 1155232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:50:51.906432 1155232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:50:51.906519 1155232 ssh_runner.go:195] Run: which lz4
	I0731 21:50:51.910498 1155232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:50:51.914854 1155232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:50:51.914899 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:50:53.367409 1155232 crio.go:462] duration metric: took 1.456955177s to copy over tarball
	I0731 21:50:53.367506 1155232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
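Because the guest reports no preloaded images (crio.go:510 above), the ~406 MB preload tarball is copied in and unpacked under /var. The stat probe's %!s(MISSING) %!y(MISSING) verbs presumably correspond to stat -c "%s %y". A sketch of the copy-and-extract step, with paths abbreviated relative to the log and a plain scp standing in for minikube's internal file transfer:
	# 1. Probe the guest for an existing tarball (fails here: "No such file or directory").
	stat -c "%s %y" /preloaded.tar.lz4
	# 2. Copy the cached tarball from the host into the guest (stand-in for minikube's transfer).
	scp -i ~/.minikube/machines/calico-605794/id_rsa \
	  ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 \
	  docker@192.168.72.131:/preloaded.tar.lz4
	# 3. Unpack on the guest, preserving extended attributes and file capabilities.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4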
	I0731 21:50:53.639189 1155156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:50:53.672044 1155156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:50:54.139158 1155156 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0731 21:50:54.140859 1155156 node_ready.go:35] waiting up to 15m0s for node "auto-605794" to be "Ready" ...
	I0731 21:50:54.815636 1155156 node_ready.go:49] node "auto-605794" has status "Ready":"True"
	I0731 21:50:54.815665 1155156 node_ready.go:38] duration metric: took 674.775149ms for node "auto-605794" to be "Ready" ...
	I0731 21:50:54.815684 1155156 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:50:55.346756 1155156 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-605794" context rescaled to 1 replicas
	I0731 21:50:55.350943 1155156 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-dtc2v" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:56.449170 1155156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.809937677s)
	I0731 21:50:56.449241 1155156 main.go:141] libmachine: Making call to close driver server
	I0731 21:50:56.449255 1155156 main.go:141] libmachine: (auto-605794) Calling .Close
	I0731 21:50:56.449265 1155156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.777172132s)
	I0731 21:50:56.449325 1155156 main.go:141] libmachine: Making call to close driver server
	I0731 21:50:56.449344 1155156 main.go:141] libmachine: (auto-605794) Calling .Close
	I0731 21:50:56.449724 1155156 main.go:141] libmachine: (auto-605794) DBG | Closing plugin on server side
	I0731 21:50:56.449772 1155156 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:50:56.449782 1155156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:50:56.449800 1155156 main.go:141] libmachine: Making call to close driver server
	I0731 21:50:56.449810 1155156 main.go:141] libmachine: (auto-605794) Calling .Close
	I0731 21:50:56.449822 1155156 main.go:141] libmachine: (auto-605794) DBG | Closing plugin on server side
	I0731 21:50:56.449896 1155156 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:50:56.449908 1155156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:50:56.449920 1155156 main.go:141] libmachine: Making call to close driver server
	I0731 21:50:56.449944 1155156 main.go:141] libmachine: (auto-605794) Calling .Close
	I0731 21:50:56.450224 1155156 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:50:56.450242 1155156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:50:56.451729 1155156 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:50:56.451759 1155156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:50:56.451770 1155156 main.go:141] libmachine: (auto-605794) DBG | Closing plugin on server side
	I0731 21:50:56.471226 1155156 main.go:141] libmachine: Making call to close driver server
	I0731 21:50:56.471258 1155156 main.go:141] libmachine: (auto-605794) Calling .Close
	I0731 21:50:56.471582 1155156 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:50:56.471642 1155156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:50:56.473290 1155156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 21:50:56.474575 1155156 addons.go:510] duration metric: took 3.243588054s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 21:50:57.492815 1155156 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtc2v" in "kube-system" namespace has status "Ready":"False"
	I0731 21:50:57.857935 1155156 pod_ready.go:92] pod "coredns-7db6d8ff4d-dtc2v" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:57.857959 1155156 pod_ready.go:81] duration metric: took 2.506977463s for pod "coredns-7db6d8ff4d-dtc2v" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.857969 1155156 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-skj8k" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.862836 1155156 pod_ready.go:92] pod "coredns-7db6d8ff4d-skj8k" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:57.862862 1155156 pod_ready.go:81] duration metric: took 4.887031ms for pod "coredns-7db6d8ff4d-skj8k" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.862872 1155156 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.867429 1155156 pod_ready.go:92] pod "etcd-auto-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:57.867455 1155156 pod_ready.go:81] duration metric: took 4.576053ms for pod "etcd-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.867470 1155156 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.871982 1155156 pod_ready.go:92] pod "kube-apiserver-auto-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:57.872006 1155156 pod_ready.go:81] duration metric: took 4.529338ms for pod "kube-apiserver-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.872016 1155156 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.950024 1155156 pod_ready.go:92] pod "kube-controller-manager-auto-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:57.950052 1155156 pod_ready.go:81] duration metric: took 78.028761ms for pod "kube-controller-manager-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:57.950066 1155156 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-25t5t" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:58.350734 1155156 pod_ready.go:92] pod "kube-proxy-25t5t" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:58.350765 1155156 pod_ready.go:81] duration metric: took 400.690658ms for pod "kube-proxy-25t5t" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:58.350779 1155156 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:56.333726 1155232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.966180059s)
	I0731 21:50:56.333757 1155232 crio.go:469] duration metric: took 2.966312009s to extract the tarball
	I0731 21:50:56.333766 1155232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:50:56.376935 1155232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:50:56.423420 1155232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:50:56.423461 1155232 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:50:56.423472 1155232 kubeadm.go:934] updating node { 192.168.72.131 8443 v1.30.3 crio true true} ...
	I0731 21:50:56.423624 1155232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-605794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:calico-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0731 21:50:56.423707 1155232 ssh_runner.go:195] Run: crio config
	I0731 21:50:56.475118 1155232 cni.go:84] Creating CNI manager for "calico"
	I0731 21:50:56.475145 1155232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:50:56.475179 1155232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.131 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-605794 NodeName:calico-605794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:50:56.475331 1155232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-605794"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
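The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a minimal sketch (not minikube code; assumes gopkg.in/yaml.v3 is available), the documents in such a file can be enumerated and sanity-checked like this:

    // Enumerate the documents in a multi-document kubeadm config.
    // Minimal sketch under the stated assumptions; the path is the one from the log.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break // all documents parsed
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }

For the config shown above this would print the four kinds in order: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.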
	
	I0731 21:50:56.475416 1155232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:50:56.485677 1155232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:50:56.485767 1155232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:50:56.498932 1155232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 21:50:56.518763 1155232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:50:56.536783 1155232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 21:50:56.554581 1155232 ssh_runner.go:195] Run: grep 192.168.72.131	control-plane.minikube.internal$ /etc/hosts
	I0731 21:50:56.558800 1155232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
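The bash one-liner above makes the control-plane.minikube.internal entry idempotent: it drops any existing line for that name, appends the fresh IP, and installs the result via a temp file and sudo cp. A rough local equivalent in Go (a sketch only; the scratch path is hypothetical, do not point it at the real /etc/hosts):

    // Drop any stale control-plane.minikube.internal line, then append the new one.
    // Sketch only: hostsPath is a scratch file, not the real /etc/hosts.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/tmp/hosts-example"
        const entry = "192.168.72.131\tcontrol-plane.minikube.internal"

        data, _ := os.ReadFile(hostsPath) // a missing file is fine: start empty
        var kept []string
        if len(data) > 0 {
            for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
                if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                    kept = append(kept, line)
                }
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }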
	I0731 21:50:56.571794 1155232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:50:56.690982 1155232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:50:56.708769 1155232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794 for IP: 192.168.72.131
	I0731 21:50:56.708802 1155232 certs.go:194] generating shared ca certs ...
	I0731 21:50:56.708825 1155232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:56.709028 1155232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:50:56.709099 1155232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:50:56.709114 1155232 certs.go:256] generating profile certs ...
	I0731 21:50:56.709188 1155232 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.key
	I0731 21:50:56.709202 1155232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.crt with IP's: []
	I0731 21:50:57.030617 1155232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.crt ...
	I0731 21:50:57.030653 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.crt: {Name:mk848a2833c30fa4dbd6c67c1928823ec3399809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.033941 1155232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.key ...
	I0731 21:50:57.033972 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/client.key: {Name:mk7b927531a17442d87a54c6c0dac544315a836c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.034109 1155232 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key.c2ad6603
	I0731 21:50:57.034134 1155232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt.c2ad6603 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.131]
	I0731 21:50:57.165037 1155232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt.c2ad6603 ...
	I0731 21:50:57.165072 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt.c2ad6603: {Name:mk8cc7a6a532896404f2aa73e7c9624813f39c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.165271 1155232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key.c2ad6603 ...
	I0731 21:50:57.165287 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key.c2ad6603: {Name:mk9aab13d3fcf7260691826f6f9742462b8434df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.165382 1155232 certs.go:381] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt.c2ad6603 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt
	I0731 21:50:57.165487 1155232 certs.go:385] copying /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key.c2ad6603 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key
	I0731 21:50:57.165569 1155232 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.key
	I0731 21:50:57.165595 1155232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.crt with IP's: []
	I0731 21:50:57.475755 1155232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.crt ...
	I0731 21:50:57.475797 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.crt: {Name:mk8bf9d5141c2cb55f8810831969a2a1f6b1da04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.476010 1155232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.key ...
	I0731 21:50:57.476032 1155232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.key: {Name:mk05b0f67270bee761d98354b77e2536da75782a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:50:57.476279 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:50:57.476332 1155232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:50:57.476348 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:50:57.476383 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:50:57.476418 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:50:57.476450 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:50:57.476505 1155232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:50:57.477208 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:50:57.514260 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:50:57.544270 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:50:57.573634 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:50:57.603682 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 21:50:57.628841 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:50:57.653504 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:50:57.679347 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/calico-605794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:50:57.703948 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:50:57.728055 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:50:57.752366 1155232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:50:57.776484 1155232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:50:57.793473 1155232 ssh_runner.go:195] Run: openssl version
	I0731 21:50:57.799828 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:50:57.811277 1155232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:50:57.815982 1155232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:50:57.816059 1155232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:50:57.821549 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:50:57.832414 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:50:57.843519 1155232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:57.848190 1155232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:57.848276 1155232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:50:57.854156 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:50:57.867694 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:50:57.880561 1155232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:50:57.885174 1155232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:50:57.885248 1155232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:50:57.891234 1155232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
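The openssl x509 -hash / ln -fs pairs above install each CA bundle under its OpenSSL subject hash in /etc/ssl/certs, which is how OpenSSL-linked clients locate a trusted CA. A minimal sketch of that pairing (it shells out to openssl rather than computing the hash in Go; the paths are the ones from the log and the link must be created as root):

    // Create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA certificate.
    // Minimal sketch of the openssl x509 -hash / ln -fs pair seen above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // mimic ln -fs: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }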
	I0731 21:50:57.905924 1155232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:50:57.910345 1155232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
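The failed stat above is the signal used to conclude that this is a first start: no apiserver-kubelet-client cert exists on the node yet, so kubeadm init (rather than a restart path) is the next step. A local stand-in for that check, treating a non-zero exit as "missing" (illustrative only, not minikube's ssh_runner):

    // First-start detection: branch on the exit status of stat.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("stat", "/var/lib/minikube/certs/apiserver-kubelet-client.crt").Run()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("cert exists: existing cluster, reuse it")
        case errors.As(err, &exitErr):
            fmt.Println("cert missing: likely first start, run kubeadm init")
        default:
            fmt.Println("could not run stat:", err)
        }
    }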
	I0731 21:50:57.910419 1155232 kubeadm.go:392] StartCluster: {Name:calico-605794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:calico-605794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.131 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:50:57.910528 1155232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:50:57.910742 1155232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:50:57.954038 1155232 cri.go:89] found id: ""
	I0731 21:50:57.954118 1155232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:50:57.965498 1155232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:50:57.976850 1155232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:50:57.987275 1155232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:50:57.987300 1155232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:50:57.987351 1155232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:50:57.996979 1155232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:50:57.997090 1155232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:50:58.008357 1155232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:50:58.017991 1155232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:50:58.018075 1155232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:50:58.028272 1155232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:50:58.037787 1155232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:50:58.037864 1155232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:50:58.048233 1155232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:50:58.058072 1155232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:50:58.058157 1155232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:50:58.068293 1155232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:50:58.262124 1155232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
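The Start line above is the actual cluster bootstrap: kubeadm init runs with the rendered config and a long --ignore-preflight-errors list so kubeadm does not abort on checks that have already been handled (pre-created manifest directories, swap, low CPU/memory, port 10250). A sketch of how such an invocation can be assembled (abbreviated ignore list; this prints the command instead of running it and is not minikube's runner):

    // Assemble a kubeadm init command line of the shape logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.30.3"
        ignore := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "Port-10250", "Swap", "NumCPU", "Mem", // abbreviated
        }
        cmdline := fmt.Sprintf(
            `sudo env PATH="%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
            binDir, strings.Join(ignore, ","),
        )
        cmd := exec.Command("/bin/bash", "-c", cmdline)
        fmt.Println(cmd.String()) // print rather than run in this sketch
    }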
	I0731 21:50:58.750378 1155156 pod_ready.go:92] pod "kube-scheduler-auto-605794" in "kube-system" namespace has status "Ready":"True"
	I0731 21:50:58.750408 1155156 pod_ready.go:81] duration metric: took 399.621711ms for pod "kube-scheduler-auto-605794" in "kube-system" namespace to be "Ready" ...
	I0731 21:50:58.750417 1155156 pod_ready.go:38] duration metric: took 3.934719321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:50:58.750434 1155156 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:50:58.750514 1155156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:50:58.765407 1155156 api_server.go:72] duration metric: took 5.534607132s to wait for apiserver process to appear ...
	I0731 21:50:58.765436 1155156 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:50:58.765461 1155156 api_server.go:253] Checking apiserver healthz at https://192.168.61.197:8443/healthz ...
	I0731 21:50:58.769838 1155156 api_server.go:279] https://192.168.61.197:8443/healthz returned 200:
	ok
	I0731 21:50:58.770879 1155156 api_server.go:141] control plane version: v1.30.3
	I0731 21:50:58.770903 1155156 api_server.go:131] duration metric: took 5.46025ms to wait for apiserver health ...
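The healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" ends it. A minimal sketch of the same probe (it skips TLS verification for brevity, whereas the real check should trust the cluster CA; the address is the one from the log):

    // Probe the apiserver /healthz endpoint once.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.197:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 "ok"
    }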
	I0731 21:50:58.770911 1155156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:50:58.952905 1155156 system_pods.go:59] 8 kube-system pods found
	I0731 21:50:58.952960 1155156 system_pods.go:61] "coredns-7db6d8ff4d-dtc2v" [fb25490c-95b9-4ddb-a0f6-5eff2ac94e52] Running
	I0731 21:50:58.952969 1155156 system_pods.go:61] "coredns-7db6d8ff4d-skj8k" [aaa3705d-988d-47b8-97ae-5f98085bd417] Running
	I0731 21:50:58.952974 1155156 system_pods.go:61] "etcd-auto-605794" [c72b2701-3ad5-46ed-a25e-13a1b7838ae0] Running
	I0731 21:50:58.952979 1155156 system_pods.go:61] "kube-apiserver-auto-605794" [d62422d0-6d43-43bc-82a6-45e62667105c] Running
	I0731 21:50:58.952984 1155156 system_pods.go:61] "kube-controller-manager-auto-605794" [174c1453-530d-4a92-9123-e0bafa233d43] Running
	I0731 21:50:58.952988 1155156 system_pods.go:61] "kube-proxy-25t5t" [ec022361-54aa-4a92-8acc-e8ce49c93fa2] Running
	I0731 21:50:58.952992 1155156 system_pods.go:61] "kube-scheduler-auto-605794" [ac18cd5e-2dad-49d5-b04e-df9e1f56dbd9] Running
	I0731 21:50:58.952997 1155156 system_pods.go:61] "storage-provisioner" [10fda435-8833-4a76-b112-3e3c3a000866] Running
	I0731 21:50:58.953008 1155156 system_pods.go:74] duration metric: took 182.090996ms to wait for pod list to return data ...
	I0731 21:50:58.953020 1155156 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:50:59.150328 1155156 default_sa.go:45] found service account: "default"
	I0731 21:50:59.150367 1155156 default_sa.go:55] duration metric: took 197.339072ms for default service account to be created ...
	I0731 21:50:59.150378 1155156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:50:59.352207 1155156 system_pods.go:86] 8 kube-system pods found
	I0731 21:50:59.352239 1155156 system_pods.go:89] "coredns-7db6d8ff4d-dtc2v" [fb25490c-95b9-4ddb-a0f6-5eff2ac94e52] Running
	I0731 21:50:59.352246 1155156 system_pods.go:89] "coredns-7db6d8ff4d-skj8k" [aaa3705d-988d-47b8-97ae-5f98085bd417] Running
	I0731 21:50:59.352250 1155156 system_pods.go:89] "etcd-auto-605794" [c72b2701-3ad5-46ed-a25e-13a1b7838ae0] Running
	I0731 21:50:59.352254 1155156 system_pods.go:89] "kube-apiserver-auto-605794" [d62422d0-6d43-43bc-82a6-45e62667105c] Running
	I0731 21:50:59.352277 1155156 system_pods.go:89] "kube-controller-manager-auto-605794" [174c1453-530d-4a92-9123-e0bafa233d43] Running
	I0731 21:50:59.352281 1155156 system_pods.go:89] "kube-proxy-25t5t" [ec022361-54aa-4a92-8acc-e8ce49c93fa2] Running
	I0731 21:50:59.352285 1155156 system_pods.go:89] "kube-scheduler-auto-605794" [ac18cd5e-2dad-49d5-b04e-df9e1f56dbd9] Running
	I0731 21:50:59.352289 1155156 system_pods.go:89] "storage-provisioner" [10fda435-8833-4a76-b112-3e3c3a000866] Running
	I0731 21:50:59.352296 1155156 system_pods.go:126] duration metric: took 201.910573ms to wait for k8s-apps to be running ...
	I0731 21:50:59.352303 1155156 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:50:59.352352 1155156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:50:59.370072 1155156 system_svc.go:56] duration metric: took 17.752755ms WaitForService to wait for kubelet
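The WaitForService step above boils down to an exit-status check on systemctl: status 0 from is-active means the kubelet unit is running. A minimal local sketch of that check (unit name taken from the log; not minikube's runner):

    // Treat a zero exit from `systemctl is-active --quiet kubelet` as "running".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }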
	I0731 21:50:59.370132 1155156 kubeadm.go:582] duration metric: took 6.139337386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:50:59.370162 1155156 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:50:59.551016 1155156 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:50:59.551076 1155156 node_conditions.go:123] node cpu capacity is 2
	I0731 21:50:59.551088 1155156 node_conditions.go:105] duration metric: took 180.92126ms to run NodePressure ...
	I0731 21:50:59.551103 1155156 start.go:241] waiting for startup goroutines ...
	I0731 21:50:59.551113 1155156 start.go:246] waiting for cluster config update ...
	I0731 21:50:59.551127 1155156 start.go:255] writing updated cluster config ...
	I0731 21:50:59.551443 1155156 ssh_runner.go:195] Run: rm -f paused
	I0731 21:50:59.605538 1155156 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:50:59.607316 1155156 out.go:177] * Done! kubectl is now configured to use "auto-605794" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.934312096Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655741457467,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T21:34:15.433703669Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:462d3b8efcfa334fbd404f88d8bbe805ff769e8daaf14728c60c8dfcc85619fd,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-7fxm2,Uid:2651e359-a15a-4958-a9bb-9080efbd6943,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655608901551,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-7fxm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2651e359-a15a-4958-a9bb-9080efbd694
3,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.300802392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h54vh,Uid:fd09813a-38fd-4620-8b89-67dbf0ba4173,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655424951666,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd09813a-38fd-4620-8b89-67dbf0ba4173,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.114705517Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h6wll,Uid:16a3c2ad-faff-49cf
-8a56-d36681b771c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655419236188,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:15.106765835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&PodSandboxMetadata{Name:kube-proxy-j6jnw,Uid:8e59f643-6f37-4f5e-a862-89a39008af1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461655222103716,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:34:14.915683399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-563652,Uid:f017abb8a101cece6e5cd8ce971a8ba6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636093016352,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f017abb8a101cece6e5cd8ce971a8ba6,kubernetes.io/config.seen: 2024-07-31T21:33:55.635401348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-563652,Uid:464315ef6a164806305dfc2e66305983,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636077989854,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 464315ef6a164806305dfc2e66305983,kubernetes.io/config.seen: 2024-07-31T21:33:55.635400261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-563652,Uid:162d0f3c9978bf8fc52c13a660e67af3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636076708358,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.203:8443,kubernetes.io/config.hash: 162d0f3c9978bf8fc52c13a660e67af3,kubernetes.io/config.seen: 2024-07-31T21:33:55.635398594Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-563652,Uid:5d2dcaf67c9531013a82e502bb415293,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722461636076069265,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.203:2379,kubernetes.io/config.hash: 5d2dcaf67c9531013a82e502bb415293,kubernetes.io/config.seen: 2024-07-31T21:33:55.635394478Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6a0ef49f-1dcf-445d-905a-1aa1b7181e4e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.935285036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4c0eacf-c5ca-495a-ab8f-d7c879f13494 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.935350107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4c0eacf-c5ca-495a-ab8f-d7c879f13494 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.935563947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4c0eacf-c5ca-495a-ab8f-d7c879f13494 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.945991207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6aeb6fe-f395-4583-baa9-a6310c7830b3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.946066802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6aeb6fe-f395-4583-baa9-a6310c7830b3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.947418871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcceb14b-9e6a-4ac5-bcf7-35e9b3d66ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.947867084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462668947844446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcceb14b-9e6a-4ac5-bcf7-35e9b3d66ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.948434407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7613de0-d669-4bd1-b4eb-04a5b080ce2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.948506315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7613de0-d669-4bd1-b4eb-04a5b080ce2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.948734356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7613de0-d669-4bd1-b4eb-04a5b080ce2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.990165043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=571798ad-a810-4b88-996b-0867f7e08532 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.990254612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=571798ad-a810-4b88-996b-0867f7e08532 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.991317799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11a96de5-993e-4356-8563-47667c699eda name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.992020016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462668991997582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11a96de5-993e-4356-8563-47667c699eda name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.992691523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6b7f7ce-09b1-48f9-ae9b-290a00e95a9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.992787936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6b7f7ce-09b1-48f9-ae9b-290a00e95a9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:08 embed-certs-563652 crio[723]: time="2024-07-31 21:51:08.993026150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6b7f7ce-09b1-48f9-ae9b-290a00e95a9b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.030620889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce089814-28ed-4335-aea7-57ccd84fdbce name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.030837602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce089814-28ed-4335-aea7-57ccd84fdbce name=/runtime.v1.RuntimeService/Version
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.032314408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b1bbe5a-c255-4a26-ad92-6aaa133d7d7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.033647178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462669033618494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b1bbe5a-c255-4a26-ad92-6aaa133d7d7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.034311303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f3af024-d349-45ee-bc7a-4519d8f4cb23 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.034379113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f3af024-d349-45ee-bc7a-4519d8f4cb23 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:51:09 embed-certs-563652 crio[723]: time="2024-07-31 21:51:09.034599216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb,PodSandboxId:6757bc5fa5813b273b23011381873a26c67e4eccd992b893d07d01983afe460f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461656136800214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f1c311-1547-42ea-b1ad-cefdf7ffeba0,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6b5594,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17,PodSandboxId:094e2162eee087e5c0cc2840c4232733643a048fa8a6a08bb3ad5d4443020449,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655952606832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h6wll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a3c2ad-faff-49cf-8a56-d36681b771c2,},Annotations:map[string]string{io.kubernetes.container.hash: 504bf332,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2,PodSandboxId:8d6a86a376deaba0463d2259ae067eee198d6c954b20c73feb4363b0e4d099bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461655932295585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h54vh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
d09813a-38fd-4620-8b89-67dbf0ba4173,},Annotations:map[string]string{io.kubernetes.container.hash: ab505c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e,PodSandboxId:f19f1de752add270e6085bb7197c5c958111ada7b9e9a5e503bf0b3c9e7ce792,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722461655516927416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6jnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e59f643-6f37-4f5e-a862-89a39008af1a,},Annotations:map[string]string{io.kubernetes.container.hash: b7c71f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4,PodSandboxId:ee26674ae029247d1368105c2a3705ace878bcb987455924f6fa77151a3c4635,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722461636282419435,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f017abb8a101cece6e5cd8ce971a8ba6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5,PodSandboxId:08d21c82dbad9cea210bcdc3ac6fa76d4d77e70bfc625f0e2415c44b0d92422f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722461636305925164,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2dcaf67c9531013a82e502bb415293,},Annotations:map[string]string{io.kubernetes.container.hash: e4f6e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56,PodSandboxId:1f6693c12f0286214200bdf6d5b311ca6400673a10cce7b1e56b5ca2a6f6a6a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722461636261475801,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 464315ef6a164806305dfc2e66305983,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968,PodSandboxId:1bacefb178d9a31610e0d2a51a91e868a159a030aba39429dded6a745d6fa5e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722461636216550094,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-563652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 162d0f3c9978bf8fc52c13a660e67af3,},Annotations:map[string]string{io.kubernetes.container.hash: 62b0085b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f3af024-d349-45ee-bc7a-4519d8f4cb23 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	929e8e0237b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   6757bc5fa5813       storage-provisioner
	1917ee6e3264c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   094e2162eee08       coredns-7db6d8ff4d-h6wll
	ef7d9a266760a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   8d6a86a376dea       coredns-7db6d8ff4d-h54vh
	32428cc9d80fa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   f19f1de752add       kube-proxy-j6jnw
	6144e374d65e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   08d21c82dbad9       etcd-embed-certs-563652
	8eaeda92ae420       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   17 minutes ago      Running             kube-scheduler            2                   ee26674ae0292       kube-scheduler-embed-certs-563652
	854524558aeaa       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   17 minutes ago      Running             kube-controller-manager   2                   1f6693c12f028       kube-controller-manager-embed-certs-563652
	f06ea2f769524       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   17 minutes ago      Running             kube-apiserver            2                   1bacefb178d9a       kube-apiserver-embed-certs-563652
	
	
	==> coredns [1917ee6e3264c6bcd5d0e40705a77171ba2f504dd6ba9a11ac473488f29b5b17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ef7d9a266760ab861538ab0c0308d5ed2a228f91ec8b4673e159f6d34a41bdb2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-563652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-563652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=embed-certs-563652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:33:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-563652
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:49:38 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:49:38 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:49:38 +0000   Wed, 31 Jul 2024 21:33:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:49:38 +0000   Wed, 31 Jul 2024 21:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.203
	  Hostname:    embed-certs-563652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6b096e405244ed7a0b3856840b914ed
	  System UUID:                c6b096e4-0524-4ed7-a0b3-856840b914ed
	  Boot ID:                    7dd9ff6b-65f4-4768-9371-90df345781ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-h54vh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-h6wll                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-563652                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-563652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-563652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-j6jnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-563652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-569cc877fc-7fxm2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-563652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node embed-certs-563652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node embed-certs-563652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-563652 event: Registered Node embed-certs-563652 in Controller
	
	
	==> dmesg <==
	[  +0.047815] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036939] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.717730] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.922139] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536373] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.803409] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.056483] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066342] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.166225] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.156031] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.317520] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jul31 21:29] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.067806] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.310897] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +6.110801] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.919884] kauditd_printk_skb: 89 callbacks suppressed
	[Jul31 21:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.798741] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +4.455930] kauditd_printk_skb: 57 callbacks suppressed
	[Jul31 21:34] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[ +12.906073] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.113917] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:35] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6144e374d65e475ad77784fa8ff3ff367271b55f7d1ed03645330602078e12b5] <==
	{"level":"info","ts":"2024-07-31T21:48:50.848998Z","caller":"traceutil/trace.go:171","msg":"trace[1095021209] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1154; }","duration":"297.585244ms","start":"2024-07-31T21:48:50.551404Z","end":"2024-07-31T21:48:50.84899Z","steps":["trace[1095021209] 'agreement among raft nodes before linearized reading'  (duration: 297.473943ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:48:50.849127Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:48:50.454858Z","time spent":"394.101416ms","remote":"127.0.0.1:52068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.203\" mod_revision:1146 > success:<request_put:<key:\"/registry/masterleases/192.168.50.203\" value_size:67 lease:8659455658728873408 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.203\" > >"}
	{"level":"warn","ts":"2024-07-31T21:48:51.241821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.86302ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8659455658728873414 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1152 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T21:48:51.241912Z","caller":"traceutil/trace.go:171","msg":"trace[23259208] linearizableReadLoop","detail":"{readStateIndex:1350; appliedIndex:1349; }","duration":"387.698571ms","start":"2024-07-31T21:48:50.854201Z","end":"2024-07-31T21:48:51.241899Z","steps":["trace[23259208] 'read index received'  (duration: 125.686928ms)","trace[23259208] 'applied index is now lower than readState.Index'  (duration: 262.010389ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T21:48:51.242122Z","caller":"traceutil/trace.go:171","msg":"trace[1314823693] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"388.181235ms","start":"2024-07-31T21:48:50.853925Z","end":"2024-07-31T21:48:51.242106Z","steps":["trace[1314823693] 'process raft request'  (duration: 125.958762ms)","trace[1314823693] 'compare'  (duration: 261.709411ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:48:51.242215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:48:50.853909Z","time spent":"388.263639ms","remote":"127.0.0.1:52216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1152 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T21:48:51.242449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.242271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2024-07-31T21:48:51.242504Z","caller":"traceutil/trace.go:171","msg":"trace[353991634] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1155; }","duration":"388.312457ms","start":"2024-07-31T21:48:50.854182Z","end":"2024-07-31T21:48:51.242494Z","steps":["trace[353991634] 'agreement among raft nodes before linearized reading'  (duration: 388.177719ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:48:51.242535Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:48:50.854175Z","time spent":"388.351635ms","remote":"127.0.0.1:52308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":504,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-31T21:48:51.242718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.647169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:48:51.242769Z","caller":"traceutil/trace.go:171","msg":"trace[1483023373] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1155; }","duration":"188.731986ms","start":"2024-07-31T21:48:51.054025Z","end":"2024-07-31T21:48:51.242757Z","steps":["trace[1483023373] 'agreement among raft nodes before linearized reading'  (duration: 188.622416ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:48:57.185819Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":915}
	{"level":"info","ts":"2024-07-31T21:48:57.189331Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":915,"took":"3.185739ms","hash":2623533808,"current-db-size-bytes":2228224,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-31T21:48:57.189388Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2623533808,"revision":915,"compact-revision":672}
	{"level":"info","ts":"2024-07-31T21:49:45.986517Z","caller":"traceutil/trace.go:171","msg":"trace[56382262] transaction","detail":"{read_only:false; response_revision:1200; number_of_response:1; }","duration":"450.279051ms","start":"2024-07-31T21:49:45.536224Z","end":"2024-07-31T21:49:45.986503Z","steps":["trace[56382262] 'process raft request'  (duration: 450.178444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:49:45.986752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:49:45.536206Z","time spent":"450.481159ms","remote":"127.0.0.1:52216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1199 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T21:49:46.243476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.162769ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8659455658728873696 > lease_revoke:<id:782c910ab6c7de90>","response":"size:28"}
	{"level":"info","ts":"2024-07-31T21:49:46.243646Z","caller":"traceutil/trace.go:171","msg":"trace[318762335] linearizableReadLoop","detail":"{readStateIndex:1408; appliedIndex:1407; }","duration":"195.459709ms","start":"2024-07-31T21:49:46.048169Z","end":"2024-07-31T21:49:46.243629Z","steps":["trace[318762335] 'read index received'  (duration: 24.725µs)","trace[318762335] 'applied index is now lower than readState.Index'  (duration: 195.433385ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:49:46.243981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.795229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:49:46.244041Z","caller":"traceutil/trace.go:171","msg":"trace[1168921812] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1200; }","duration":"195.886902ms","start":"2024-07-31T21:49:46.048144Z","end":"2024-07-31T21:49:46.244031Z","steps":["trace[1168921812] 'agreement among raft nodes before linearized reading'  (duration: 195.7635ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:49:46.244157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.855845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:49:46.24423Z","caller":"traceutil/trace.go:171","msg":"trace[1137588970] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1200; }","duration":"194.934426ms","start":"2024-07-31T21:49:46.049283Z","end":"2024-07-31T21:49:46.244217Z","steps":["trace[1137588970] 'agreement among raft nodes before linearized reading'  (duration: 194.830361ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:50:30.571234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.310933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.203\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-31T21:50:30.571322Z","caller":"traceutil/trace.go:171","msg":"trace[1565085790] range","detail":"{range_begin:/registry/masterleases/192.168.50.203; range_end:; response_count:1; response_revision:1237; }","duration":"134.425136ms","start":"2024-07-31T21:50:30.436875Z","end":"2024-07-31T21:50:30.5713Z","steps":["trace[1565085790] 'range keys from in-memory index tree'  (duration: 134.170532ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:50:55.972777Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.201623ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8659455658728874030 > lease_revoke:<id:782c910ab6c7dfe3>","response":"size:28"}
	
	
	==> kernel <==
	 21:51:09 up 22 min,  0 users,  load average: 0.14, 0.20, 0.14
	Linux embed-certs-563652 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f06ea2f7695243bd8a528b117cd1d95c67a87bcb79a603973974747bae900968] <==
	I0731 21:44:59.678587       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:46:59.676441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:46:59.676576       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:46:59.676605       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:46:59.679526       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:46:59.679704       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:46:59.679737       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:48:58.681806       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:48:58.681977       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:48:59.683191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:48:59.683330       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:48:59.683361       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:48:59.683452       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:48:59.683481       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:48:59.684706       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:49:59.684432       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:49:59.684540       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:49:59.684555       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:49:59.685757       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:49:59.685806       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:49:59.685817       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [854524558aeaa5648a3af77f9f054d1b2518038404a2314acf3e0ae8c12e3b56] <==
	I0731 21:45:14.755914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:45:44.249943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:45:44.763439       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:46:14.254022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:46:14.770197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:46:44.258573       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:46:44.777696       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:14.263866       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:47:14.786436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:44.270191       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:47:44.792756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:14.275071       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:48:14.799213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:44.279964       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:48:44.806810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:14.284274       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:49:14.814037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:44.289466       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:49:44.822725       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:50:09.567818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="265.985µs"
	E0731 21:50:14.293477       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:50:14.829368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:50:23.573042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="75.927µs"
	E0731 21:50:44.297976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:50:44.837268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [32428cc9d80fabb05978b303f6ec40c0184070991d40ae9b7d4fd4557eb3710e] <==
	I0731 21:34:15.948121       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:34:15.964353       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.203"]
	I0731 21:34:16.154746       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:34:16.154790       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:34:16.154812       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:34:16.160132       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:34:16.160769       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:34:16.161390       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:34:16.162806       1 config.go:192] "Starting service config controller"
	I0731 21:34:16.163070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:34:16.163137       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:34:16.163160       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:34:16.164054       1 config.go:319] "Starting node config controller"
	I0731 21:34:16.164578       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:34:16.265306       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:34:16.265427       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:34:16.265462       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8eaeda92ae420fb06b757cdbe6c7dddbbc160fa7f42295f739e3e8f38a8c71c4] <==
	W0731 21:33:58.758409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:58.758440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:58.758706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:33:58.758733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:33:58.760943       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:33:58.761025       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:33:59.618779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 21:33:59.618886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 21:33:59.709233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:33:59.709515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 21:33:59.726635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:59.726782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:59.733737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:33:59.734240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 21:33:59.749527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:33:59.750085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 21:33:59.796390       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:33:59.796505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 21:33:59.877049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:33:59.877106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 21:33:59.913559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:33:59.913609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:33:59.925308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 21:33:59.925386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0731 21:34:00.431558       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:49:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:49:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:49:03 embed-certs-563652 kubelet[3846]: E0731 21:49:03.550001    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:49:18 embed-certs-563652 kubelet[3846]: E0731 21:49:18.551058    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:49:32 embed-certs-563652 kubelet[3846]: E0731 21:49:32.550235    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:49:46 embed-certs-563652 kubelet[3846]: E0731 21:49:46.550596    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:49:58 embed-certs-563652 kubelet[3846]: E0731 21:49:58.572224    3846 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:49:58 embed-certs-563652 kubelet[3846]: E0731 21:49:58.572280    3846 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 31 21:49:58 embed-certs-563652 kubelet[3846]: E0731 21:49:58.572439    3846 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-562zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-7fxm2_kube-system(2651e359-a15a-4958-a9bb-9080efbd6943): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 31 21:49:58 embed-certs-563652 kubelet[3846]: E0731 21:49:58.572468    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:50:01 embed-certs-563652 kubelet[3846]: E0731 21:50:01.567094    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:50:01 embed-certs-563652 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:50:01 embed-certs-563652 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:50:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:50:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:50:09 embed-certs-563652 kubelet[3846]: E0731 21:50:09.551568    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:50:23 embed-certs-563652 kubelet[3846]: E0731 21:50:23.551092    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:50:37 embed-certs-563652 kubelet[3846]: E0731 21:50:37.550853    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:50:51 embed-certs-563652 kubelet[3846]: E0731 21:50:51.550419    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	Jul 31 21:51:01 embed-certs-563652 kubelet[3846]: E0731 21:51:01.566616    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:51:01 embed-certs-563652 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:51:01 embed-certs-563652 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:51:01 embed-certs-563652 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:51:01 embed-certs-563652 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:51:04 embed-certs-563652 kubelet[3846]: E0731 21:51:04.549964    3846 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-7fxm2" podUID="2651e359-a15a-4958-a9bb-9080efbd6943"
	
	
	==> storage-provisioner [929e8e0237b9cd224b5e6ff430e4e84ee7c8a693d20cd9f5fc1aca42676cefdb] <==
	I0731 21:34:16.329349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:34:16.349022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:34:16.349073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:34:16.361778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:34:16.362404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd!
	I0731 21:34:16.363453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27f8747c-2d1c-42ed-bb17-b255ca34a55a", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd became leader
	I0731 21:34:16.463108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-563652_cd362733-b55b-4185-ae02-c8225507b2bd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563652 -n embed-certs-563652
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-563652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-7fxm2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2: exit status 1 (75.932088ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-7fxm2" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-563652 describe pod metrics-server-569cc877fc-7fxm2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (468.16s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364.63s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018891 -n no-preload-018891
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:49:55.44817739 +0000 UTC m=+6036.606608505
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-018891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-018891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.085µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-018891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-018891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-018891 logs -n 25: (1.340928543s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:48 UTC |
	| start   | -p newest-cni-308216 --memory=2200 --alsologtostderr   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:48 UTC | 31 Jul 24 21:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-308216             | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-308216                  | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-308216 --memory=2200 --alsologtostderr   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-308216 image list                           | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-308216                                   | newest-cni-308216            | jenkins | v1.33.1 | 31 Jul 24 21:49 UTC | 31 Jul 24 21:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:49:19
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:49:19.440434 1154242 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:49:19.440711 1154242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:49:19.440722 1154242 out.go:304] Setting ErrFile to fd 2...
	I0731 21:49:19.440727 1154242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:49:19.440915 1154242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:49:19.441482 1154242 out.go:298] Setting JSON to false
	I0731 21:49:19.442613 1154242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19910,"bootTime":1722442649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:49:19.442712 1154242 start.go:139] virtualization: kvm guest
	I0731 21:49:19.444936 1154242 out.go:177] * [newest-cni-308216] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:49:19.446133 1154242 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:49:19.446197 1154242 notify.go:220] Checking for updates...
	I0731 21:49:19.448219 1154242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:49:19.449507 1154242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:49:19.450727 1154242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:49:19.451892 1154242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:49:19.452955 1154242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:49:19.454468 1154242 config.go:182] Loaded profile config "newest-cni-308216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:49:19.454880 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:19.454933 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:19.470424 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0731 21:49:19.470964 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:19.471686 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:19.471726 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:19.472077 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:19.472301 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:19.472608 1154242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:49:19.472914 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:19.472955 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:19.488254 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0731 21:49:19.488819 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:19.489311 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:19.489347 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:19.489711 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:19.489936 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:19.527032 1154242 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:49:19.528159 1154242 start.go:297] selected driver: kvm2
	I0731 21:49:19.528178 1154242 start.go:901] validating driver "kvm2" against &{Name:newest-cni-308216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-308216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.22 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_
pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:49:19.528335 1154242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:49:19.529125 1154242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:49:19.529233 1154242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:49:19.545345 1154242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:49:19.545823 1154242 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:49:19.545895 1154242 cni.go:84] Creating CNI manager for ""
	I0731 21:49:19.545911 1154242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:49:19.545970 1154242 start.go:340] cluster config:
	{Name:newest-cni-308216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-308216 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.22 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:49:19.546103 1154242 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:49:19.547783 1154242 out.go:177] * Starting "newest-cni-308216" primary control-plane node in "newest-cni-308216" cluster
	I0731 21:49:19.548867 1154242 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:49:19.548915 1154242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:49:19.548926 1154242 cache.go:56] Caching tarball of preloaded images
	I0731 21:49:19.549028 1154242 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:49:19.549041 1154242 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 21:49:19.549160 1154242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/config.json ...
	I0731 21:49:19.549397 1154242 start.go:360] acquireMachinesLock for newest-cni-308216: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:49:19.549454 1154242 start.go:364] duration metric: took 32.998µs to acquireMachinesLock for "newest-cni-308216"
	I0731 21:49:19.549475 1154242 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:49:19.549485 1154242 fix.go:54] fixHost starting: 
	I0731 21:49:19.549829 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:19.549871 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:19.565990 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0731 21:49:19.566436 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:19.566939 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:19.566962 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:19.567348 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:19.567573 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:19.567752 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:19.569589 1154242 fix.go:112] recreateIfNeeded on newest-cni-308216: state=Stopped err=<nil>
	I0731 21:49:19.569621 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	W0731 21:49:19.569778 1154242 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:49:19.572231 1154242 out.go:177] * Restarting existing kvm2 VM for "newest-cni-308216" ...
	I0731 21:49:19.573351 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Start
	I0731 21:49:19.573754 1154242 main.go:141] libmachine: (newest-cni-308216) Ensuring networks are active...
	I0731 21:49:19.574489 1154242 main.go:141] libmachine: (newest-cni-308216) Ensuring network default is active
	I0731 21:49:19.574820 1154242 main.go:141] libmachine: (newest-cni-308216) Ensuring network mk-newest-cni-308216 is active
	I0731 21:49:19.575254 1154242 main.go:141] libmachine: (newest-cni-308216) Getting domain xml...
	I0731 21:49:19.576221 1154242 main.go:141] libmachine: (newest-cni-308216) Creating domain...
	I0731 21:49:20.868299 1154242 main.go:141] libmachine: (newest-cni-308216) Waiting to get IP...
	I0731 21:49:20.869319 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:20.869801 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:20.869892 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:20.869767 1154277 retry.go:31] will retry after 292.156539ms: waiting for machine to come up
	I0731 21:49:21.163502 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:21.164030 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:21.164063 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:21.163981 1154277 retry.go:31] will retry after 336.352532ms: waiting for machine to come up
	I0731 21:49:21.502326 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:21.502845 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:21.502896 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:21.502793 1154277 retry.go:31] will retry after 450.229689ms: waiting for machine to come up
	I0731 21:49:21.954509 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:21.955033 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:21.955084 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:21.954966 1154277 retry.go:31] will retry after 550.39984ms: waiting for machine to come up
	I0731 21:49:22.506626 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:22.507163 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:22.507190 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:22.507081 1154277 retry.go:31] will retry after 673.127402ms: waiting for machine to come up
	I0731 21:49:23.181477 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:23.181958 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:23.181989 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:23.181909 1154277 retry.go:31] will retry after 643.755317ms: waiting for machine to come up
	I0731 21:49:23.827873 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:23.828354 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:23.828376 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:23.828328 1154277 retry.go:31] will retry after 953.732198ms: waiting for machine to come up
	I0731 21:49:24.783272 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:24.783825 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:24.783855 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:24.783742 1154277 retry.go:31] will retry after 1.472476726s: waiting for machine to come up
	I0731 21:49:26.257569 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:26.258000 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:26.258030 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:26.257950 1154277 retry.go:31] will retry after 1.183579023s: waiting for machine to come up
	I0731 21:49:27.443274 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:27.443672 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:27.443702 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:27.443629 1154277 retry.go:31] will retry after 1.441194959s: waiting for machine to come up
	I0731 21:49:28.887463 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:28.887962 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:28.887992 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:28.887927 1154277 retry.go:31] will retry after 2.564108985s: waiting for machine to come up
	I0731 21:49:31.453141 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:31.453586 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:31.453609 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:31.453545 1154277 retry.go:31] will retry after 2.182872995s: waiting for machine to come up
	I0731 21:49:33.638833 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:33.639308 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | unable to find current IP address of domain newest-cni-308216 in network mk-newest-cni-308216
	I0731 21:49:33.639348 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | I0731 21:49:33.639279 1154277 retry.go:31] will retry after 3.331396364s: waiting for machine to come up
	I0731 21:49:36.973006 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:36.973638 1154242 main.go:141] libmachine: (newest-cni-308216) Found IP for machine: 192.168.72.22
	I0731 21:49:36.973665 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has current primary IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:36.973672 1154242 main.go:141] libmachine: (newest-cni-308216) Reserving static IP address...
	I0731 21:49:36.973998 1154242 main.go:141] libmachine: (newest-cni-308216) Reserved static IP address: 192.168.72.22
	I0731 21:49:36.974031 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "newest-cni-308216", mac: "52:54:00:85:8d:96", ip: "192.168.72.22"} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:36.974040 1154242 main.go:141] libmachine: (newest-cni-308216) Waiting for SSH to be available...
	I0731 21:49:36.974060 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | skip adding static IP to network mk-newest-cni-308216 - found existing host DHCP lease matching {name: "newest-cni-308216", mac: "52:54:00:85:8d:96", ip: "192.168.72.22"}
	I0731 21:49:36.974069 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Getting to WaitForSSH function...
	I0731 21:49:36.976502 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:36.976859 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:36.976886 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:36.977047 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Using SSH client type: external
	I0731 21:49:36.977082 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa (-rw-------)
	I0731 21:49:36.977111 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:49:36.977132 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | About to run SSH command:
	I0731 21:49:36.977168 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | exit 0
	I0731 21:49:37.100193 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | SSH cmd err, output: <nil>: 
	I0731 21:49:37.100588 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetConfigRaw
	I0731 21:49:37.101233 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetIP
	I0731 21:49:37.103932 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.104389 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.104424 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.104684 1154242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/config.json ...
	I0731 21:49:37.104892 1154242 machine.go:94] provisionDockerMachine start ...
	I0731 21:49:37.104913 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:37.105134 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.107727 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.108138 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.108171 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.108377 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:37.108587 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.108765 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.108942 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:37.109118 1154242 main.go:141] libmachine: Using SSH client type: native
	I0731 21:49:37.109341 1154242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I0731 21:49:37.109354 1154242 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:49:37.216410 1154242 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:49:37.216441 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetMachineName
	I0731 21:49:37.216736 1154242 buildroot.go:166] provisioning hostname "newest-cni-308216"
	I0731 21:49:37.216764 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetMachineName
	I0731 21:49:37.216973 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.219829 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.220255 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.220300 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.220513 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:37.220740 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.220890 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.221014 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:37.221168 1154242 main.go:141] libmachine: Using SSH client type: native
	I0731 21:49:37.221367 1154242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I0731 21:49:37.221382 1154242 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-308216 && echo "newest-cni-308216" | sudo tee /etc/hostname
	I0731 21:49:37.341302 1154242 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-308216
	
	I0731 21:49:37.341332 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.344287 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.344672 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.344723 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.344901 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:37.345119 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.345262 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.345422 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:37.345594 1154242 main.go:141] libmachine: Using SSH client type: native
	I0731 21:49:37.345754 1154242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I0731 21:49:37.345771 1154242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-308216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-308216/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-308216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:49:37.457595 1154242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:49:37.457632 1154242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:49:37.457665 1154242 buildroot.go:174] setting up certificates
	I0731 21:49:37.457677 1154242 provision.go:84] configureAuth start
	I0731 21:49:37.457689 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetMachineName
	I0731 21:49:37.458001 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetIP
	I0731 21:49:37.461024 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.461404 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.461432 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.461568 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.464047 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.464366 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.464404 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.464560 1154242 provision.go:143] copyHostCerts
	I0731 21:49:37.464618 1154242 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:49:37.464628 1154242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:49:37.464692 1154242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:49:37.464782 1154242 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:49:37.464792 1154242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:49:37.464816 1154242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:49:37.464866 1154242 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:49:37.464873 1154242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:49:37.464894 1154242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:49:37.464942 1154242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.newest-cni-308216 san=[127.0.0.1 192.168.72.22 localhost minikube newest-cni-308216]
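The server certificate generated here carries the SAN list shown above (127.0.0.1, 192.168.72.22, localhost, minikube, newest-cni-308216). A minimal sketch for inspecting it from the host, assuming OpenSSL 1.1.1+ is available; the path is the one logged above:
	# Print subject, validity window and SAN list of the freshly generated
	# machine server certificate (path taken from the provision.go line above).
	cert=/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem
	openssl x509 -noout -subject -dates -in "$cert"
	openssl x509 -noout -ext subjectAltName -in "$cert"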
	I0731 21:49:37.738830 1154242 provision.go:177] copyRemoteCerts
	I0731 21:49:37.738898 1154242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:49:37.738933 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.741833 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.742174 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.742207 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.742372 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:37.742632 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.742814 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:37.742991 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:37.826860 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:49:37.850909 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:49:37.875525 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:49:37.900329 1154242 provision.go:87] duration metric: took 442.634674ms to configureAuth
	I0731 21:49:37.900368 1154242 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:49:37.900625 1154242 config.go:182] Loaded profile config "newest-cni-308216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:49:37.900727 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:37.903741 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.904129 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:37.904163 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:37.904352 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:37.904584 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.904783 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:37.904940 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:37.905148 1154242 main.go:141] libmachine: Using SSH client type: native
	I0731 21:49:37.905322 1154242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I0731 21:49:37.905343 1154242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:49:38.166150 1154242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:49:38.166187 1154242 machine.go:97] duration metric: took 1.061280571s to provisionDockerMachine
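The /etc/sysconfig/crio.minikube file written just above only takes effect because the guest's crio.service expands CRIO_MINIKUBE_OPTIONS on its ExecStart line (an assumption about the minikube ISO's unit file). A quick verification sketch, run over SSH inside the guest:
	# Confirm the environment file exists and that CRI-O came back up with the
	# extra flag on its command line after the restart.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio
	ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'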
	I0731 21:49:38.166203 1154242 start.go:293] postStartSetup for "newest-cni-308216" (driver="kvm2")
	I0731 21:49:38.166219 1154242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:49:38.166243 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:38.166708 1154242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:49:38.166740 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:38.169308 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.169677 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:38.169709 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.169830 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:38.170090 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:38.170271 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:38.170436 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:38.254803 1154242 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:49:38.258883 1154242 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:49:38.258911 1154242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:49:38.258998 1154242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:49:38.259095 1154242 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:49:38.259214 1154242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:49:38.268671 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:49:38.294492 1154242 start.go:296] duration metric: took 128.271004ms for postStartSetup
	I0731 21:49:38.294546 1154242 fix.go:56] duration metric: took 18.74506245s for fixHost
	I0731 21:49:38.294577 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:38.297466 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.297824 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:38.297855 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.298070 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:38.298295 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:38.298442 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:38.298636 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:38.298802 1154242 main.go:141] libmachine: Using SSH client type: native
	I0731 21:49:38.299000 1154242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.22 22 <nil> <nil>}
	I0731 21:49:38.299013 1154242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:49:38.404794 1154242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722462578.370084518
	
	I0731 21:49:38.404827 1154242 fix.go:216] guest clock: 1722462578.370084518
	I0731 21:49:38.404841 1154242 fix.go:229] Guest: 2024-07-31 21:49:38.370084518 +0000 UTC Remote: 2024-07-31 21:49:38.294552568 +0000 UTC m=+18.890886646 (delta=75.53195ms)
	I0731 21:49:38.404887 1154242 fix.go:200] guest clock delta is within tolerance: 75.53195ms
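The delta reported above is just the guest clock minus the host-side timestamp taken when the SSH command returned. A hand-rolled sketch of the same check (key path shortened for readability, bc assumed to be installed):
	# Read the guest clock over SSH with sub-second precision and subtract the
	# host clock; minikube tolerates small drift, as logged above.
	host_now=$(date +%s.%N)
	guest_now=$(ssh -o StrictHostKeyChecking=no \
	  -i ~/.minikube/machines/newest-cni-308216/id_rsa \
	  docker@192.168.72.22 'date +%s.%N')
	echo "delta: $(echo "$guest_now - $host_now" | bc)s"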
	I0731 21:49:38.404901 1154242 start.go:83] releasing machines lock for "newest-cni-308216", held for 18.855433148s
	I0731 21:49:38.404938 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:38.405245 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetIP
	I0731 21:49:38.408136 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.408506 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:38.408537 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.408696 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:38.409204 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:38.409388 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:38.409465 1154242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:49:38.409510 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:38.409659 1154242 ssh_runner.go:195] Run: cat /version.json
	I0731 21:49:38.409689 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:38.412452 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.412632 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.412887 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:38.412933 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.413060 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:38.413151 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:38.413183 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:38.413267 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:38.413353 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:38.413435 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:38.413538 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:38.413619 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:38.413728 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:38.413893 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:38.492980 1154242 ssh_runner.go:195] Run: systemctl --version
	I0731 21:49:38.513376 1154242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:49:38.657890 1154242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:49:38.663612 1154242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:49:38.663693 1154242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:49:38.680446 1154242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:49:38.680487 1154242 start.go:495] detecting cgroup driver to use...
	I0731 21:49:38.680560 1154242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:49:38.698676 1154242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:49:38.713273 1154242 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:49:38.713339 1154242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:49:38.727893 1154242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:49:38.742263 1154242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:49:38.869290 1154242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:49:39.008148 1154242 docker.go:233] disabling docker service ...
	I0731 21:49:39.008219 1154242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:49:39.022413 1154242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:49:39.035069 1154242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:49:39.183700 1154242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:49:39.319809 1154242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:49:39.333539 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:49:39.352805 1154242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:49:39.352900 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.363548 1154242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:49:39.363623 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.373520 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.383807 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.393890 1154242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:49:39.404036 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.413968 1154242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.432586 1154242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:49:39.444799 1154242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:49:39.454901 1154242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:49:39.454960 1154242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:49:39.468164 1154242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
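The sysctl probe above fails because br_netfilter is not loaded yet; loading the module creates the /proc/sys/net/bridge entries (which typically default to 1). A sketch of the manual equivalent of the two commands above, plus a verification step:
	# Load the bridge-netfilter module and enable IPv4 forwarding, then confirm
	# the sysctl that previously returned "No such file or directory" now exists.
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo sysctl net.bridge.bridge-nf-call-iptables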
	I0731 21:49:39.477884 1154242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:49:39.606237 1154242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:49:39.741178 1154242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:49:39.741251 1154242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:49:39.747129 1154242 start.go:563] Will wait 60s for crictl version
	I0731 21:49:39.747222 1154242 ssh_runner.go:195] Run: which crictl
	I0731 21:49:39.750836 1154242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:49:39.794225 1154242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
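crictl reads its endpoint from the /etc/crictl.yaml written a few lines earlier, so no --runtime-endpoint flag is needed. A minimal usage sketch inside the guest:
	# Same information as the version probe above, plus runtime status and the
	# containers CRI-O currently knows about (none yet at this point in the log).
	sudo crictl version
	sudo crictl info | head -n 20
	sudo crictl ps -a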
	I0731 21:49:39.794344 1154242 ssh_runner.go:195] Run: crio --version
	I0731 21:49:39.825211 1154242 ssh_runner.go:195] Run: crio --version
	I0731 21:49:39.855446 1154242 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:49:39.856764 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetIP
	I0731 21:49:39.859518 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:39.859912 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:39.859935 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:39.860236 1154242 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:49:39.864307 1154242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:49:39.878572 1154242 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0731 21:49:39.879661 1154242 kubeadm.go:883] updating cluster {Name:newest-cni-308216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-308216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.22 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:49:39.879816 1154242 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:49:39.879898 1154242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:49:39.917746 1154242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:49:39.917828 1154242 ssh_runner.go:195] Run: which lz4
	I0731 21:49:39.922192 1154242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:49:39.926684 1154242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:49:39.926720 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0731 21:49:41.200723 1154242 crio.go:462] duration metric: took 1.278572229s to copy over tarball
	I0731 21:49:41.200819 1154242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:49:43.305433 1154242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10456756s)
	I0731 21:49:43.305464 1154242 crio.go:469] duration metric: took 2.104707011s to extract the tarball
	I0731 21:49:43.305472 1154242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:49:43.342285 1154242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:49:43.389821 1154242 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:49:43.389852 1154242 cache_images.go:84] Images are preloaded, skipping loading
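Once the preload tarball has been unpacked into /var, the CRI-O image store should already contain the v1.31.0-beta.0 control-plane images. A sketch for listing them, assuming jq is available in the guest:
	# List the repo tags CRI-O reports after the preload; kube-apiserver,
	# kube-controller-manager, kube-scheduler and kube-proxy should all appear.
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort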
	I0731 21:49:43.389862 1154242 kubeadm.go:934] updating node { 192.168.72.22 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:49:43.390020 1154242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-308216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-308216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
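The unit fragment above is installed a few lines below as a systemd drop-in (10-kubeadm.conf). Once it is in place, the merged unit and its state can be checked with systemctl; a sketch:
	# Show the kubelet unit together with the drop-in that carries the ExecStart
	# override above, then check whether the service is running.
	sudo systemctl cat kubelet
	systemctl is-active kubelet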
	I0731 21:49:43.390112 1154242 ssh_runner.go:195] Run: crio config
	I0731 21:49:43.442396 1154242 cni.go:84] Creating CNI manager for ""
	I0731 21:49:43.442420 1154242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:49:43.442432 1154242 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0731 21:49:43.442465 1154242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.22 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-308216 NodeName:newest-cni-308216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:49:43.442630 1154242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-308216"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:49:43.442694 1154242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:49:43.452599 1154242 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:49:43.452680 1154242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:49:43.461872 1154242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0731 21:49:43.478706 1154242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:49:43.494783 1154242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
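Before the .new file replaces /var/tmp/minikube/kubeadm.yaml, it can be syntax-checked against the API versions it declares. Recent kubeadm releases ship a validate subcommand; treating its availability in v1.31.0-beta.0 as an assumption, a sketch:
	# Validate the rendered kubeadm configuration without applying anything.
	sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new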
	I0731 21:49:43.512948 1154242 ssh_runner.go:195] Run: grep 192.168.72.22	control-plane.minikube.internal$ /etc/hosts
	I0731 21:49:43.516746 1154242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:49:43.528383 1154242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:49:43.655953 1154242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:49:43.673666 1154242 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216 for IP: 192.168.72.22
	I0731 21:49:43.673699 1154242 certs.go:194] generating shared ca certs ...
	I0731 21:49:43.673719 1154242 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:49:43.673905 1154242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:49:43.673956 1154242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:49:43.673970 1154242 certs.go:256] generating profile certs ...
	I0731 21:49:43.674071 1154242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/client.key
	I0731 21:49:43.674151 1154242 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/apiserver.key.be2d1209
	I0731 21:49:43.674204 1154242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/proxy-client.key
	I0731 21:49:43.674354 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:49:43.674395 1154242 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:49:43.674411 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:49:43.674461 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:49:43.674497 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:49:43.674533 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:49:43.674599 1154242 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:49:43.675433 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:49:43.729642 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:49:43.761918 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:49:43.794364 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:49:43.840567 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:49:43.865350 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:49:43.892399 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:49:43.918273 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/newest-cni-308216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:49:43.942558 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:49:43.966424 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:49:43.991776 1154242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:49:44.016761 1154242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:49:44.034060 1154242 ssh_runner.go:195] Run: openssl version
	I0731 21:49:44.039837 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:49:44.051399 1154242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:49:44.056256 1154242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:49:44.056335 1154242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:49:44.062117 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:49:44.072959 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:49:44.083847 1154242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:49:44.088297 1154242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:49:44.088367 1154242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:49:44.093965 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:49:44.105786 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:49:44.116735 1154242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:49:44.121510 1154242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:49:44.121579 1154242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:49:44.127460 1154242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
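The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject-hash names: each symlink is named after the certificate's subject hash so OpenSSL's default CA lookup can find it. A sketch of the same idea for the minikubeCA certificate:
	# Compute the subject hash and create the <hash>.0 symlink, exactly what the
	# test-and-link commands above do for each certificate.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	ls -l "/etc/ssl/certs/${hash}.0"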
	I0731 21:49:44.138065 1154242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:49:44.142629 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:49:44.148811 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:49:44.155874 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:49:44.162220 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:49:44.168293 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:49:44.174630 1154242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
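Each -checkend 86400 call asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it expires within that window, which is what triggers regeneration. A one-line sketch:
	# Exit status 0 here means the apiserver-kubelet-client cert is good for at
	# least another 24 hours.
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "ok for 24h"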
	I0731 21:49:44.180662 1154242 kubeadm.go:392] StartCluster: {Name:newest-cni-308216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-308216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.22 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:49:44.180786 1154242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:49:44.180905 1154242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:49:44.216594 1154242 cri.go:89] found id: ""
	I0731 21:49:44.216677 1154242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:49:44.226785 1154242 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:49:44.226811 1154242 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:49:44.226869 1154242 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:49:44.236682 1154242 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:49:44.237683 1154242 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-308216" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:49:44.238336 1154242 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-308216" cluster setting kubeconfig missing "newest-cni-308216" context setting]
	I0731 21:49:44.239221 1154242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:49:44.241038 1154242 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:49:44.251241 1154242 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.22
	I0731 21:49:44.251286 1154242 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:49:44.251305 1154242 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:49:44.251395 1154242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:49:44.293363 1154242 cri.go:89] found id: ""
	I0731 21:49:44.293453 1154242 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:49:44.309680 1154242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:49:44.319155 1154242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:49:44.319181 1154242 kubeadm.go:157] found existing configuration files:
	
	I0731 21:49:44.319294 1154242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:49:44.328511 1154242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:49:44.328588 1154242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:49:44.337869 1154242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:49:44.347215 1154242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:49:44.347320 1154242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:49:44.357415 1154242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:49:44.366811 1154242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:49:44.366881 1154242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:49:44.376235 1154242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:49:44.384826 1154242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:49:44.384930 1154242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
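The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and here every grep exits with status 2 because the files are missing, so all four are removed and kubeadm regenerates them in the init phases that follow. A minimal Go sketch of the same check, run locally rather than over SSH as minikube does; the file list and endpoint are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the
// log above. Error handling is simplified for the sketch.
func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
}
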
	I0731 21:49:44.394879 1154242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:49:44.404555 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:44.505922 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:45.730333 1154242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.224367682s)
	I0731 21:49:45.730373 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:46.198729 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:46.268863 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:46.359805 1154242 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:49:46.359942 1154242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:49:46.860268 1154242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:49:47.360771 1154242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:49:47.374899 1154242 api_server.go:72] duration metric: took 1.015092827s to wait for apiserver process to appear ...
	I0731 21:49:47.374936 1154242 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:49:47.374963 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:47.375403 1154242 api_server.go:269] stopped: https://192.168.72.22:8443/healthz: Get "https://192.168.72.22:8443/healthz": dial tcp 192.168.72.22:8443: connect: connection refused
	I0731 21:49:47.875045 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:49.933949 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:49:49.933986 1154242 api_server.go:103] status: https://192.168.72.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:49:49.934003 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:49.981098 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:49:49.981131 1154242 api_server.go:103] status: https://192.168.72.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:49:50.375298 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:50.380779 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:49:50.380826 1154242 api_server.go:103] status: https://192.168.72.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:49:50.875334 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:50.879913 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:49:50.879946 1154242 api_server.go:103] status: https://192.168.72.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:49:51.375464 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:51.379992 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 200:
	ok
	I0731 21:49:51.386689 1154242 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:49:51.386717 1154242 api_server.go:131] duration metric: took 4.011774748s to wait for apiserver health ...
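The healthz wait above tolerates the intermediate responses seen in the log: a 403 while the request is still anonymous, then 500s while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, polling until /healthz finally returns 200 "ok". A minimal Go sketch of that polling loop; skipping TLS verification is an assumption made for the sketch, not something shown in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline expires. 403 and 500 responses (as seen in the
// log while post-start hooks complete) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// TLS verification is skipped in this sketch; a real probe could pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.22:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
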
	I0731 21:49:51.386726 1154242 cni.go:84] Creating CNI manager for ""
	I0731 21:49:51.386733 1154242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:49:51.388789 1154242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:49:51.390226 1154242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:49:51.401360 1154242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:49:51.422419 1154242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:49:51.433527 1154242 system_pods.go:59] 8 kube-system pods found
	I0731 21:49:51.433568 1154242 system_pods.go:61] "coredns-5cfdc65f69-kc4dg" [65bdec34-8ed7-4992-b9c4-ad3acf3d1f79] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:49:51.433579 1154242 system_pods.go:61] "etcd-newest-cni-308216" [e2178a76-665b-4150-8217-68f234703d2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:49:51.433587 1154242 system_pods.go:61] "kube-apiserver-newest-cni-308216" [4768afcb-d853-4c74-a152-b0ded158d66d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:49:51.433593 1154242 system_pods.go:61] "kube-controller-manager-newest-cni-308216" [568274c6-44eb-48ba-9c42-b43f96e0331b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:49:51.433600 1154242 system_pods.go:61] "kube-proxy-2c4qz" [37950b3f-ff86-4a39-bbb0-25e77c496caf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:49:51.433605 1154242 system_pods.go:61] "kube-scheduler-newest-cni-308216" [6907a9ab-71fd-4103-a31c-d8196e6cc136] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:49:51.433610 1154242 system_pods.go:61] "metrics-server-78fcd8795b-jqlvm" [e3d60f4c-ec12-4c0c-82f1-a409d7b463d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:49:51.433616 1154242 system_pods.go:61] "storage-provisioner" [517e9d0d-1e09-4d5a-b266-1754357eb573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:49:51.433626 1154242 system_pods.go:74] duration metric: took 11.176461ms to wait for pod list to return data ...
	I0731 21:49:51.433634 1154242 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:49:51.437930 1154242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:49:51.437962 1154242 node_conditions.go:123] node cpu capacity is 2
	I0731 21:49:51.437977 1154242 node_conditions.go:105] duration metric: took 4.33494ms to run NodePressure ...
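The NodePressure verification above reads node capacity (17734596Ki ephemeral storage, 2 CPUs) back from the API server. A minimal client-go sketch that prints the same fields; the kubeconfig path is a placeholder for illustration, not the path used by this job:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists nodes and prints the capacity fields the log checks
// (ephemeral storage and CPU count).
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
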
	I0731 21:49:51.438003 1154242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:49:51.754891 1154242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:49:51.766524 1154242 ops.go:34] apiserver oom_adj: -16
	I0731 21:49:51.766553 1154242 kubeadm.go:597] duration metric: took 7.539733217s to restartPrimaryControlPlane
	I0731 21:49:51.766563 1154242 kubeadm.go:394] duration metric: took 7.585911942s to StartCluster
	I0731 21:49:51.766600 1154242 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:49:51.766697 1154242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:49:51.768426 1154242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:49:51.768667 1154242 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.22 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:49:51.768740 1154242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:49:51.768834 1154242 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-308216"
	I0731 21:49:51.768851 1154242 addons.go:69] Setting default-storageclass=true in profile "newest-cni-308216"
	I0731 21:49:51.768872 1154242 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-308216"
	W0731 21:49:51.768881 1154242 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:49:51.768891 1154242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-308216"
	I0731 21:49:51.768878 1154242 addons.go:69] Setting metrics-server=true in profile "newest-cni-308216"
	I0731 21:49:51.768889 1154242 addons.go:69] Setting dashboard=true in profile "newest-cni-308216"
	I0731 21:49:51.768914 1154242 host.go:66] Checking if "newest-cni-308216" exists ...
	I0731 21:49:51.768934 1154242 addons.go:234] Setting addon dashboard=true in "newest-cni-308216"
	I0731 21:49:51.768934 1154242 addons.go:234] Setting addon metrics-server=true in "newest-cni-308216"
	W0731 21:49:51.768945 1154242 addons.go:243] addon metrics-server should already be in state true
	W0731 21:49:51.768948 1154242 addons.go:243] addon dashboard should already be in state true
	I0731 21:49:51.768968 1154242 config.go:182] Loaded profile config "newest-cni-308216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:49:51.768990 1154242 host.go:66] Checking if "newest-cni-308216" exists ...
	I0731 21:49:51.769005 1154242 host.go:66] Checking if "newest-cni-308216" exists ...
	I0731 21:49:51.769226 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.769279 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.769380 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.769423 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.769441 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.769479 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.769551 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.769621 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.770600 1154242 out.go:177] * Verifying Kubernetes components...
	I0731 21:49:51.771911 1154242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:49:51.786423 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0731 21:49:51.786451 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
	I0731 21:49:51.787038 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.787051 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.787123 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I0731 21:49:51.787510 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.787622 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.787644 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.787703 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.787721 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.788005 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.788030 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.788056 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.788065 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.788260 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:51.788805 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.788849 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.788887 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0731 21:49:51.788904 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.789409 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.789546 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.789582 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.789851 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.789867 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.790304 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.790959 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.790994 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.792294 1154242 addons.go:234] Setting addon default-storageclass=true in "newest-cni-308216"
	W0731 21:49:51.792318 1154242 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:49:51.792349 1154242 host.go:66] Checking if "newest-cni-308216" exists ...
	I0731 21:49:51.792709 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.792754 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.811220 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34065
	I0731 21:49:51.811600 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0731 21:49:51.811625 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0731 21:49:51.811880 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.812021 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.812260 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.812463 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.812487 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.812535 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.812552 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.812884 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.812905 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.812972 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.813016 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.813139 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:51.813195 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:51.813689 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.813916 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:51.815535 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:51.815675 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:51.816293 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:51.816566 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0731 21:49:51.817420 1154242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:49:51.817451 1154242 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:49:51.817513 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.818210 1154242 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0731 21:49:51.818523 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.818549 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.818961 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.819249 1154242 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:49:51.819270 1154242 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:49:51.819293 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:51.819342 1154242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:49:51.819392 1154242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:49:51.819411 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:51.819543 1154242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:49:51.819584 1154242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:49:51.820534 1154242 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0731 21:49:51.823193 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.823594 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.823637 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:51.823653 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.823722 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:51.823931 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:51.823995 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:51.824011 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.824160 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:51.824209 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:51.824352 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:51.824458 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:51.824487 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:51.824575 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:51.828171 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0731 21:49:51.828193 1154242 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0731 21:49:51.828215 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:51.831690 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.832151 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:51.832188 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.832459 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:51.832736 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:51.832932 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:51.833087 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:51.841141 1154242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0731 21:49:51.841699 1154242 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:49:51.842311 1154242 main.go:141] libmachine: Using API Version  1
	I0731 21:49:51.842340 1154242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:49:51.842871 1154242 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:49:51.843109 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetState
	I0731 21:49:51.844954 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .DriverName
	I0731 21:49:51.845201 1154242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:49:51.845231 1154242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:49:51.845255 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHHostname
	I0731 21:49:51.848726 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.849203 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8d:96", ip: ""} in network mk-newest-cni-308216: {Iface:virbr3 ExpiryTime:2024-07-31 22:49:30 +0000 UTC Type:0 Mac:52:54:00:85:8d:96 Iaid: IPaddr:192.168.72.22 Prefix:24 Hostname:newest-cni-308216 Clientid:01:52:54:00:85:8d:96}
	I0731 21:49:51.849228 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | domain newest-cni-308216 has defined IP address 192.168.72.22 and MAC address 52:54:00:85:8d:96 in network mk-newest-cni-308216
	I0731 21:49:51.849355 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHPort
	I0731 21:49:51.849573 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHKeyPath
	I0731 21:49:51.849759 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .GetSSHUsername
	I0731 21:49:51.850132 1154242 sshutil.go:53] new ssh client: &{IP:192.168.72.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/newest-cni-308216/id_rsa Username:docker}
	I0731 21:49:51.977684 1154242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:49:52.002372 1154242 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:49:52.002514 1154242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:49:52.025112 1154242 api_server.go:72] duration metric: took 256.408121ms to wait for apiserver process to appear ...
	I0731 21:49:52.025150 1154242 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:49:52.025182 1154242 api_server.go:253] Checking apiserver healthz at https://192.168.72.22:8443/healthz ...
	I0731 21:49:52.034436 1154242 api_server.go:279] https://192.168.72.22:8443/healthz returned 200:
	ok
	I0731 21:49:52.036201 1154242 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:49:52.036231 1154242 api_server.go:131] duration metric: took 11.071904ms to wait for apiserver health ...
	I0731 21:49:52.036240 1154242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:49:52.044139 1154242 system_pods.go:59] 8 kube-system pods found
	I0731 21:49:52.044178 1154242 system_pods.go:61] "coredns-5cfdc65f69-kc4dg" [65bdec34-8ed7-4992-b9c4-ad3acf3d1f79] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:49:52.044190 1154242 system_pods.go:61] "etcd-newest-cni-308216" [e2178a76-665b-4150-8217-68f234703d2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:49:52.044209 1154242 system_pods.go:61] "kube-apiserver-newest-cni-308216" [4768afcb-d853-4c74-a152-b0ded158d66d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:49:52.044220 1154242 system_pods.go:61] "kube-controller-manager-newest-cni-308216" [568274c6-44eb-48ba-9c42-b43f96e0331b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:49:52.044227 1154242 system_pods.go:61] "kube-proxy-2c4qz" [37950b3f-ff86-4a39-bbb0-25e77c496caf] Running
	I0731 21:49:52.044238 1154242 system_pods.go:61] "kube-scheduler-newest-cni-308216" [6907a9ab-71fd-4103-a31c-d8196e6cc136] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:49:52.044249 1154242 system_pods.go:61] "metrics-server-78fcd8795b-jqlvm" [e3d60f4c-ec12-4c0c-82f1-a409d7b463d8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:49:52.044260 1154242 system_pods.go:61] "storage-provisioner" [517e9d0d-1e09-4d5a-b266-1754357eb573] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:49:52.044274 1154242 system_pods.go:74] duration metric: took 8.02335ms to wait for pod list to return data ...
	I0731 21:49:52.044285 1154242 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:49:52.048251 1154242 default_sa.go:45] found service account: "default"
	I0731 21:49:52.048284 1154242 default_sa.go:55] duration metric: took 3.987175ms for default service account to be created ...
	I0731 21:49:52.048299 1154242 kubeadm.go:582] duration metric: took 279.605837ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:49:52.048321 1154242 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:49:52.051740 1154242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:49:52.051765 1154242 node_conditions.go:123] node cpu capacity is 2
	I0731 21:49:52.051780 1154242 node_conditions.go:105] duration metric: took 3.452341ms to run NodePressure ...
	I0731 21:49:52.051794 1154242 start.go:241] waiting for startup goroutines ...
	I0731 21:49:52.090825 1154242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:49:52.090863 1154242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:49:52.092003 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0731 21:49:52.092026 1154242 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0731 21:49:52.133144 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0731 21:49:52.133178 1154242 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0731 21:49:52.136317 1154242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:49:52.136344 1154242 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:49:52.163829 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0731 21:49:52.163865 1154242 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0731 21:49:52.181641 1154242 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:49:52.181681 1154242 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:49:52.188375 1154242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:49:52.210439 1154242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:49:52.223248 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0731 21:49:52.223286 1154242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0731 21:49:52.228194 1154242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:49:52.292733 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0731 21:49:52.292761 1154242 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0731 21:49:52.347914 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0731 21:49:52.347949 1154242 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0731 21:49:52.415805 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0731 21:49:52.415841 1154242 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0731 21:49:52.471963 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0731 21:49:52.472003 1154242 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0731 21:49:52.589358 1154242 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:49:52.589386 1154242 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0731 21:49:52.637796 1154242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:49:53.710898 1154242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.522482688s)
	I0731 21:49:53.710952 1154242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.500458766s)
	I0731 21:49:53.710975 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.710999 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.711004 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.711127 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.711516 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:53.711537 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.711553 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:53.711560 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.711589 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.711604 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.711608 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.711616 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.711620 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.711628 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.711872 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.711891 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.711870 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.711905 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.721299 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.721325 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.721784 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:53.721827 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.721840 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.828952 1154242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.600692468s)
	I0731 21:49:53.829023 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.829048 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.829338 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.829354 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.829373 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:53.829386 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:53.829702 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:53.829718 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:53.829730 1154242 addons.go:475] Verifying addon metrics-server=true in "newest-cni-308216"
	I0731 21:49:53.829773 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:54.015068 1154242 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.377219891s)
	I0731 21:49:54.015145 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:54.015168 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:54.015629 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:54.015633 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:54.015660 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:54.015671 1154242 main.go:141] libmachine: Making call to close driver server
	I0731 21:49:54.015678 1154242 main.go:141] libmachine: (newest-cni-308216) Calling .Close
	I0731 21:49:54.015945 1154242 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:49:54.016034 1154242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:49:54.016016 1154242 main.go:141] libmachine: (newest-cni-308216) DBG | Closing plugin on server side
	I0731 21:49:54.017889 1154242 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-308216 addons enable metrics-server
	
	I0731 21:49:54.019430 1154242 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0731 21:49:54.020655 1154242 addons.go:510] duration metric: took 2.251917041s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0731 21:49:54.020705 1154242 start.go:246] waiting for cluster config update ...
	I0731 21:49:54.020725 1154242 start.go:255] writing updated cluster config ...
	I0731 21:49:54.021106 1154242 ssh_runner.go:195] Run: rm -f paused
	I0731 21:49:54.076870 1154242 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:49:54.078562 1154242 out.go:177] * Done! kubectl is now configured to use "newest-cni-308216" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.175853623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462596175826867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fcf2150-ae52-4d3c-8e98-851fd45f6d89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.176507286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7dc11bc-ea51-4cb0-9d8a-6086966ade56 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.176593183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7dc11bc-ea51-4cb0-9d8a-6086966ade56 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.176795750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7dc11bc-ea51-4cb0-9d8a-6086966ade56 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.216825951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67ad09a2-2ecf-4c44-b2ff-17de985cfc38 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.216957715Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67ad09a2-2ecf-4c44-b2ff-17de985cfc38 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.218091816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe48fee9-771a-4138-ad74-531933488b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.218823743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462596218786695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe48fee9-771a-4138-ad74-531933488b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.219445524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da4e548c-54ae-4a83-b4ec-a33e446e93e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.219554993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da4e548c-54ae-4a83-b4ec-a33e446e93e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.219986874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da4e548c-54ae-4a83-b4ec-a33e446e93e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.263399934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f4dbe86-0f55-4dd1-9298-a03bbef9f4c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.263508790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f4dbe86-0f55-4dd1-9298-a03bbef9f4c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.264980727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e266769d-f44e-413a-8321-6a394434925a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.265562782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462596265532124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e266769d-f44e-413a-8321-6a394434925a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.266516184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72e9b0dd-5e33-4825-9da4-df54d4a250c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.266605151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72e9b0dd-5e33-4825-9da4-df54d4a250c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.266870069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72e9b0dd-5e33-4825-9da4-df54d4a250c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.309991643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=331405b6-a078-4fec-a764-aeeeaa060588 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.310121614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=331405b6-a078-4fec-a764-aeeeaa060588 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.311816496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38fc80f7-e318-48d4-8ce8-c39d0f869933 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.312421854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462596312386937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38fc80f7-e318-48d4-8ce8-c39d0f869933 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.313128892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ccaf3e9-2400-4bb0-bdd0-429def5cf66e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.313221917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ccaf3e9-2400-4bb0-bdd0-429def5cf66e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:49:56 no-preload-018891 crio[721]: time="2024-07-31 21:49:56.313526267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6da8e27e2fa414b0ec1ec07b849a6b9bd3f21d8d1bea8f30782dbe5b75d8f96e,PodSandboxId:b8150e18accbbd08f04407b2fd0dbdea00410e94170d7a02f3cbc0c85c87464f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722461436522766919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 67c16d33-f140-4fe1-addb-121b6e20e72b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88,PodSandboxId:658154f080370eea95400d685eecb30c8d34db0506f4519f81332ce0a952ea51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722461434913684377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9w4w4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ee0da2-837d-46d8-9615-1021a5ad28b9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722461419851608434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b235,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca,PodSandboxId:575654bf126ec4b63d9db21a7438b222d3126b5a5c0c58f0052d7aa384f8c5b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722461419141635958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2dnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6403e5-f31e-4e5a-ba
4f-32bc746c18ec,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f,PodSandboxId:984ba1c1bd42f4f3c9cc64ed0b66905261725a9a2fdcb4099451180767505576,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722461419133390409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fc2f0d-7f78-4a87-83a1-94558267b2
35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6,PodSandboxId:1120fbbd2a3893ed8fbb2b992bce43fb1a10954f9efd4b91a6ff5daf919eddeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722461415424779797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727de0fa3f6cbe53a76c06f29db5f604,},Annotations:map[string]string{io.kuber
netes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3,PodSandboxId:05f4f00f9ac91502cfa5dc6b2ecbeaff217a1c26376c20f1f4967725c1ca2f9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722461415428780244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1cef0270e9353f8805fb0506ba7f946,},Annotations:map[string]st
ring{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618,PodSandboxId:066237e6eb60485acad4d7c3155094835595991c2b5b138fb5c793e371f0a2c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722461415443704741,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4479d2ecc9e7e300e8902502640890,},Annotations:map[string]string{io.kuber
netes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396,PodSandboxId:4a7176ca61e62f6d12fa5dfbbdb7908c1f59f4eeff0bca89bb473d127b18aa2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722461415380473936,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-018891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80797f1f899f51d1cec6afc7d6cb6f43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ccaf3e9-2400-4bb0-bdd0-429def5cf66e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6da8e27e2fa41       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   b8150e18accbb       busybox
	efba76f74230d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   658154f080370       coredns-5cfdc65f69-9w4w4
	a4d6f8d417836       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       4                   984ba1c1bd42f       storage-provisioner
	1aa83cc70feca       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      19 minutes ago      Running             kube-proxy                1                   575654bf126ec       kube-proxy-x2dnn
	c579a97b62d1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       3                   984ba1c1bd42f       storage-provisioner
	e71c179bd22e9       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      19 minutes ago      Running             kube-scheduler            1                   066237e6eb604       kube-scheduler-no-preload-018891
	8d94e11c56302       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      19 minutes ago      Running             kube-controller-manager   1                   05f4f00f9ac91       kube-controller-manager-no-preload-018891
	d614beb36e5ab       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      19 minutes ago      Running             etcd                      1                   1120fbbd2a389       etcd-no-preload-018891
	a11eb6669e85e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      19 minutes ago      Running             kube-apiserver            1                   4a7176ca61e62       kube-apiserver-no-preload-018891
	
	
	==> coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37983 - 17262 "HINFO IN 7894977547777157273.8102779924257215395. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021019267s
	
	
	==> describe nodes <==
	Name:               no-preload-018891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-018891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1
	                    minikube.k8s.io/name=no-preload-018891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_20_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:20:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-018891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:49:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:46:05 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:46:05 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:46:05 +0000   Wed, 31 Jul 2024 21:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:46:05 +0000   Wed, 31 Jul 2024 21:30:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.246
	  Hostname:    no-preload-018891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad9f5af829224e0ca46f9d3d9a20647b
	  System UUID:                ad9f5af8-2922-4e0c-a46f-9d3d9a20647b
	  Boot ID:                    1d1d9902-9814-4a48-ab99-e976437c2299
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5cfdc65f69-9w4w4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-no-preload-018891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-018891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-018891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-x2dnn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-018891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-78fcd8795b-c7lxw              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-018891 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-018891 event: Registered Node no-preload-018891 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-018891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-018891 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-018891 event: Registered Node no-preload-018891 in Controller
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048838] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037940] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.054239] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944818] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.543149] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.136393] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.060563] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072215] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.185917] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.147171] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.311239] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Jul31 21:30] systemd-fstab-generator[1167]: Ignoring "noauto" option for root device
	[  +0.056877] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.903083] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +4.564100] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.492811] systemd-fstab-generator[1963]: Ignoring "noauto" option for root device
	[  +5.243864] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.799549] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] <==
	{"level":"info","ts":"2024-07-31T21:30:17.19886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:30:17.198901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgPreVoteResp from c9a5eb5753c44688 at term 2"}
	{"level":"info","ts":"2024-07-31T21:30:17.198948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.19896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 received MsgVoteResp from c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.198976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9a5eb5753c44688 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.198983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9a5eb5753c44688 elected leader c9a5eb5753c44688 at term 3"}
	{"level":"info","ts":"2024-07-31T21:30:17.209986Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c9a5eb5753c44688","local-member-attributes":"{Name:no-preload-018891 ClientURLs:[https://192.168.61.246:2379]}","request-path":"/0/members/c9a5eb5753c44688/attributes","cluster-id":"f649e0b6c01be2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:30:17.210049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:30:17.210399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:30:17.210455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:30:17.210438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:30:17.211105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:30:17.211316Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:30:17.211991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.246:2379"}
	{"level":"info","ts":"2024-07-31T21:30:17.212198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:40:17.241063Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-07-31T21:40:17.249367Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":860,"took":"7.885289ms","hash":221528161,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2723840,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-31T21:40:17.249468Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":221528161,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T21:45:17.24988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2024-07-31T21:45:17.25299Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1102,"took":"2.766067ms","hash":1594099269,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1716224,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-07-31T21:45:17.253068Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1594099269,"revision":1102,"compact-revision":860}
	{"level":"warn","ts":"2024-07-31T21:48:50.916979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.152704ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:48:50.917215Z","caller":"traceutil/trace.go:171","msg":"trace[1519765230] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1520; }","duration":"194.398964ms","start":"2024-07-31T21:48:50.722795Z","end":"2024-07-31T21:48:50.917194Z","steps":["trace[1519765230] 'range keys from in-memory index tree'  (duration: 194.093112ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:48:52.812208Z","caller":"traceutil/trace.go:171","msg":"trace[545630376] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"147.003056ms","start":"2024-07-31T21:48:52.665183Z","end":"2024-07-31T21:48:52.812186Z","steps":["trace[545630376] 'process raft request'  (duration: 146.661905ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:49:45.207074Z","caller":"traceutil/trace.go:171","msg":"trace[1003713132] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"127.636118ms","start":"2024-07-31T21:49:45.079418Z","end":"2024-07-31T21:49:45.207054Z","steps":["trace[1003713132] 'process raft request'  (duration: 127.522256ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:49:56 up 20 min,  0 users,  load average: 0.31, 0.28, 0.17
	Linux no-preload-018891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] <==
	W0731 21:45:19.511427       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:45:19.511495       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:45:19.512660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:45:19.512765       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:46:19.513825       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:46:19.513914       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:46:19.513999       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:46:19.514067       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:46:19.515157       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:46:19.515235       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:48:19.516236       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:48:19.516387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:48:19.516457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:48:19.516547       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:48:19.517545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:48:19.517631       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] <==
	E0731 21:44:52.731929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:44:53.283037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:45:22.737485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:45:23.289703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:45:52.743136       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:45:53.298174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:46:05.260215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-018891"
	E0731 21:46:22.749527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:46:23.312682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:46:37.792913       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="99.034µs"
	I0731 21:46:51.794102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="66.477µs"
	E0731 21:46:52.754951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:46:53.320508       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:22.760976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:47:23.327151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:47:52.766447       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:47:53.336264       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:22.773334       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:48:23.344175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:48:52.781522       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:48:53.351010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:22.786983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:49:23.359136       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:49:52.793853       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:49:53.369428       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 21:30:19.341735       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 21:30:19.351979       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.246"]
	E0731 21:30:19.352064       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 21:30:19.391382       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 21:30:19.391439       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:30:19.391479       1 server_linux.go:170] "Using iptables Proxier"
	I0731 21:30:19.396114       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 21:30:19.396904       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 21:30:19.397392       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:30:19.415226       1 config.go:197] "Starting service config controller"
	I0731 21:30:19.415409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:30:19.415484       1 config.go:104] "Starting endpoint slice config controller"
	I0731 21:30:19.415514       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:30:19.420381       1 config.go:326] "Starting node config controller"
	I0731 21:30:19.420416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:30:19.515518       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:30:19.515711       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:30:19.521584       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] <==
	I0731 21:30:16.525789       1 serving.go:386] Generated self-signed cert in-memory
	W0731 21:30:18.486935       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:30:18.487007       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:30:18.487017       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:30:18.487026       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:30:18.571690       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 21:30:18.571724       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:30:18.578433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:30:18.579437       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:30:18.579626       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:30:18.579789       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 21:30:18.680201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:47:14 no-preload-018891 kubelet[1295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:47:14 no-preload-018891 kubelet[1295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:47:16 no-preload-018891 kubelet[1295]: E0731 21:47:16.778220    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:47:29 no-preload-018891 kubelet[1295]: E0731 21:47:29.777660    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:47:40 no-preload-018891 kubelet[1295]: E0731 21:47:40.778016    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:47:54 no-preload-018891 kubelet[1295]: E0731 21:47:54.779238    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:48:05 no-preload-018891 kubelet[1295]: E0731 21:48:05.778924    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:48:14 no-preload-018891 kubelet[1295]: E0731 21:48:14.800438    1295 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:48:14 no-preload-018891 kubelet[1295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:48:14 no-preload-018891 kubelet[1295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:48:14 no-preload-018891 kubelet[1295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:48:14 no-preload-018891 kubelet[1295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:48:19 no-preload-018891 kubelet[1295]: E0731 21:48:19.778316    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:48:33 no-preload-018891 kubelet[1295]: E0731 21:48:33.777497    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:48:47 no-preload-018891 kubelet[1295]: E0731 21:48:47.778110    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:48:59 no-preload-018891 kubelet[1295]: E0731 21:48:59.778187    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:49:13 no-preload-018891 kubelet[1295]: E0731 21:49:13.777623    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:49:14 no-preload-018891 kubelet[1295]: E0731 21:49:14.800193    1295 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:49:14 no-preload-018891 kubelet[1295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:49:14 no-preload-018891 kubelet[1295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:49:14 no-preload-018891 kubelet[1295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:49:14 no-preload-018891 kubelet[1295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:49:26 no-preload-018891 kubelet[1295]: E0731 21:49:26.778688    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:49:39 no-preload-018891 kubelet[1295]: E0731 21:49:39.777631    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	Jul 31 21:49:51 no-preload-018891 kubelet[1295]: E0731 21:49:51.778465    1295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-c7lxw" podUID="6b18e5a9-5996-4650-97ea-204405ba9d89"
	
	
	==> storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] <==
	I0731 21:30:19.942627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:30:19.954044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:30:19.954095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:30:37.358346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:30:37.358999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80815874-157f-46a3-99c5-ff3e7bda36cc", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368 became leader
	I0731 21:30:37.359219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368!
	I0731 21:30:37.460090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-018891_f9f576b2-ef8c-4f4a-9658-c155db924368!
	
	
	==> storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] <==
	I0731 21:30:19.247812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:30:19.250385       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-018891 -n no-preload-018891
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-018891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-c7lxw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw: exit status 1 (73.938844ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-c7lxw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-018891 describe pod metrics-server-78fcd8795b-c7lxw: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
E0731 21:47:00.019103 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.107:8443: connect: connection refused
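The connection-refused warning above was emitted repeatedly while the helper polled for the dashboard pod; the API server at 192.168.72.107:8443 never became reachable during the wait. As a rough manual check, illustrative only and not part of the recorded run, one could probe the endpoint directly:

    curl -k --connect-timeout 5 https://192.168.72.107:8443/healthz

While nothing is listening on that port, the probe should fail with the same "connection refused" error.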
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (235.219145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-275462" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-275462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-275462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.445µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-275462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
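The image assertion at start_stop_delete_test.go:297 had nothing to inspect because the describe call above timed out. Once the cluster is reachable again, the deployed scraper image could be read directly; an illustrative command, not part of the recorded run, is:

    kubectl --context old-k8s-version-275462 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects the value to contain registry.k8s.io/echoserver:1.4, matching the image override passed to 'addons enable dashboard' earlier in this run (see the Audit table below).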
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (225.599204ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275462 logs -n 25: (1.660103897s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-238338                              | cert-expiration-238338       | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-018891             | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC | 31 Jul 24 21:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-018891                                   | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-563652            | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:22 UTC |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:22 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275462        | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-202332                           | kubernetes-upgrade-202332    | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-318420 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:23 UTC |
	|         | disable-driver-mounts-318420                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:24 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-018891                  | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-018891 --memory=2200                     | no-preload-018891            | jenkins | v1.33.1 | 31 Jul 24 21:23 UTC | 31 Jul 24 21:34 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-755535  | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC | 31 Jul 24 21:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-563652                 | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-563652                                  | embed-certs-563652           | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275462             | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC | 31 Jul 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275462                              | old-k8s-version-275462       | jenkins | v1.33.1 | 31 Jul 24 21:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-755535       | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-755535 | jenkins | v1.33.1 | 31 Jul 24 21:27 UTC | 31 Jul 24 21:34 UTC |
	|         | default-k8s-diff-port-755535                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:27:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:27:26.030260 1148013 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:27:26.030388 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030397 1148013 out.go:304] Setting ErrFile to fd 2...
	I0731 21:27:26.030401 1148013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:27:26.030608 1148013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:27:26.031249 1148013 out.go:298] Setting JSON to false
	I0731 21:27:26.032356 1148013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18597,"bootTime":1722442649,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:27:26.032418 1148013 start.go:139] virtualization: kvm guest
	I0731 21:27:26.034938 1148013 out.go:177] * [default-k8s-diff-port-755535] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:27:26.036482 1148013 notify.go:220] Checking for updates...
	I0731 21:27:26.036489 1148013 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:27:26.038147 1148013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:27:26.039588 1148013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:27:26.040948 1148013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:27:26.042283 1148013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:27:26.043447 1148013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:27:26.045210 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:27:26.045675 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.045758 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.061309 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0731 21:27:26.061780 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.062491 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.062533 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.062921 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.063189 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.063482 1148013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:27:26.063794 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:27:26.063834 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:27:26.079162 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0731 21:27:26.079645 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:27:26.080157 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:27:26.080183 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:27:26.080542 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:27:26.080745 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:27:26.118664 1148013 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:27:26.120036 1148013 start.go:297] selected driver: kvm2
	I0731 21:27:26.120101 1148013 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.120220 1148013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:27:26.120963 1148013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.121063 1148013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:27:26.137571 1148013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:27:26.137997 1148013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:27:26.138052 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:27:26.138065 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:27:26.138143 1148013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:27:26.138260 1148013 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:27:26.140210 1148013 out.go:177] * Starting "default-k8s-diff-port-755535" primary control-plane node in "default-k8s-diff-port-755535" cluster
	I0731 21:27:26.141439 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:27:26.141487 1148013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:27:26.141498 1148013 cache.go:56] Caching tarball of preloaded images
	I0731 21:27:26.141586 1148013 preload.go:172] Found /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:27:26.141597 1148013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:27:26.141693 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:27:26.141896 1148013 start.go:360] acquireMachinesLock for default-k8s-diff-port-755535: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:27:27.008495 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:30.080584 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:36.160478 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:39.232498 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:45.312414 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:48.384471 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:54.464384 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:27:57.536420 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:03.616434 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:06.688387 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:12.768424 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:15.840395 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:21.920383 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:24.992412 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:31.072430 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:34.144440 1146656 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.246:22: connect: no route to host
	I0731 21:28:37.147856 1147232 start.go:364] duration metric: took 3m32.571011548s to acquireMachinesLock for "embed-certs-563652"
	I0731 21:28:37.147925 1147232 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:37.147931 1147232 fix.go:54] fixHost starting: 
	I0731 21:28:37.148287 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:37.148321 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:37.164497 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0731 21:28:37.164970 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:37.165488 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:28:37.165514 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:37.165980 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:37.166236 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:37.166440 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:28:37.168379 1147232 fix.go:112] recreateIfNeeded on embed-certs-563652: state=Stopped err=<nil>
	I0731 21:28:37.168407 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	W0731 21:28:37.168605 1147232 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:37.170589 1147232 out.go:177] * Restarting existing kvm2 VM for "embed-certs-563652" ...
	I0731 21:28:37.171953 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Start
	I0731 21:28:37.172181 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring networks are active...
	I0731 21:28:37.173124 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network default is active
	I0731 21:28:37.173407 1147232 main.go:141] libmachine: (embed-certs-563652) Ensuring network mk-embed-certs-563652 is active
	I0731 21:28:37.173963 1147232 main.go:141] libmachine: (embed-certs-563652) Getting domain xml...
	I0731 21:28:37.174662 1147232 main.go:141] libmachine: (embed-certs-563652) Creating domain...
	I0731 21:28:38.412401 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting to get IP...
	I0731 21:28:38.413198 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.413705 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.413848 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.413679 1148299 retry.go:31] will retry after 259.485128ms: waiting for machine to come up
	I0731 21:28:38.675408 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:38.675997 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:38.676020 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:38.675947 1148299 retry.go:31] will retry after 335.618163ms: waiting for machine to come up
	I0731 21:28:39.013788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.014375 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.014410 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.014338 1148299 retry.go:31] will retry after 367.833515ms: waiting for machine to come up
	I0731 21:28:39.383927 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.384304 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.384330 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.384282 1148299 retry.go:31] will retry after 399.641643ms: waiting for machine to come up
	I0731 21:28:37.145377 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:28:37.145426 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.145841 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:28:37.145876 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:28:37.146110 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:28:37.147660 1146656 machine.go:97] duration metric: took 4m34.558419201s to provisionDockerMachine
	I0731 21:28:37.147745 1146656 fix.go:56] duration metric: took 4m34.586940428s for fixHost
	I0731 21:28:37.147761 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 4m34.586994448s
	W0731 21:28:37.147782 1146656 start.go:714] error starting host: provision: host is not running
	W0731 21:28:37.147896 1146656 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 21:28:37.147905 1146656 start.go:729] Will try again in 5 seconds ...
	I0731 21:28:39.785994 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:39.786532 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:39.786564 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:39.786477 1148299 retry.go:31] will retry after 734.925372ms: waiting for machine to come up
	I0731 21:28:40.523580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:40.523946 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:40.523976 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:40.523897 1148299 retry.go:31] will retry after 588.684081ms: waiting for machine to come up
	I0731 21:28:41.113730 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:41.114237 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:41.114269 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:41.114163 1148299 retry.go:31] will retry after 937.611465ms: waiting for machine to come up
	I0731 21:28:42.053276 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:42.053607 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:42.053631 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:42.053567 1148299 retry.go:31] will retry after 1.025772158s: waiting for machine to come up
	I0731 21:28:43.081306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:43.081710 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:43.081739 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:43.081649 1148299 retry.go:31] will retry after 1.677045484s: waiting for machine to come up
	I0731 21:28:42.148804 1146656 start.go:360] acquireMachinesLock for no-preload-018891: {Name:mke8ecf618b640d6b41bac344518efaa0b5a0542 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:28:44.761328 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:44.761956 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:44.761982 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:44.761903 1148299 retry.go:31] will retry after 2.317638211s: waiting for machine to come up
	I0731 21:28:47.081357 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:47.081798 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:47.081821 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:47.081742 1148299 retry.go:31] will retry after 2.614024076s: waiting for machine to come up
	I0731 21:28:49.697308 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:49.697764 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:49.697788 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:49.697724 1148299 retry.go:31] will retry after 2.673090887s: waiting for machine to come up
	I0731 21:28:52.372166 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:52.372536 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | unable to find current IP address of domain embed-certs-563652 in network mk-embed-certs-563652
	I0731 21:28:52.372567 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | I0731 21:28:52.372480 1148299 retry.go:31] will retry after 3.507450288s: waiting for machine to come up
	I0731 21:28:57.157052 1147424 start.go:364] duration metric: took 3m42.182815583s to acquireMachinesLock for "old-k8s-version-275462"
	I0731 21:28:57.157149 1147424 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:28:57.157159 1147424 fix.go:54] fixHost starting: 
	I0731 21:28:57.157580 1147424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:28:57.157635 1147424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:28:57.177971 1147424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 21:28:57.178444 1147424 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:28:57.179070 1147424 main.go:141] libmachine: Using API Version  1
	I0731 21:28:57.179105 1147424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:28:57.179414 1147424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:28:57.179640 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:28:57.179803 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetState
	I0731 21:28:57.181518 1147424 fix.go:112] recreateIfNeeded on old-k8s-version-275462: state=Stopped err=<nil>
	I0731 21:28:57.181566 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	W0731 21:28:57.181776 1147424 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:28:57.184336 1147424 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275462" ...
	I0731 21:28:55.884290 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.884864 1147232 main.go:141] libmachine: (embed-certs-563652) Found IP for machine: 192.168.50.203
	I0731 21:28:55.884893 1147232 main.go:141] libmachine: (embed-certs-563652) Reserving static IP address...
	I0731 21:28:55.884911 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has current primary IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.885425 1147232 main.go:141] libmachine: (embed-certs-563652) Reserved static IP address: 192.168.50.203
	I0731 21:28:55.885445 1147232 main.go:141] libmachine: (embed-certs-563652) Waiting for SSH to be available...
	I0731 21:28:55.885479 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.885500 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | skip adding static IP to network mk-embed-certs-563652 - found existing host DHCP lease matching {name: "embed-certs-563652", mac: "52:54:00:f3:4d:dd", ip: "192.168.50.203"}
	I0731 21:28:55.885515 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Getting to WaitForSSH function...
	I0731 21:28:55.887696 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888052 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:55.888109 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:55.888279 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH client type: external
	I0731 21:28:55.888310 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa (-rw-------)
	I0731 21:28:55.888353 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:28:55.888371 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | About to run SSH command:
	I0731 21:28:55.888387 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | exit 0
	I0731 21:28:56.012306 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | SSH cmd err, output: <nil>: 
	I0731 21:28:56.012807 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetConfigRaw
	I0731 21:28:56.013549 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.016243 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016580 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.016629 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.016925 1147232 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/config.json ...
	I0731 21:28:56.017152 1147232 machine.go:94] provisionDockerMachine start ...
	I0731 21:28:56.017173 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.017431 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.019693 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020075 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.020124 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.020296 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.020489 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020606 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.020705 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.020835 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.021131 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.021143 1147232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:28:56.120421 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:28:56.120455 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.120874 1147232 buildroot.go:166] provisioning hostname "embed-certs-563652"
	I0731 21:28:56.120911 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.121185 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.124050 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124509 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.124548 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.124693 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.124936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125120 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.125300 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.125456 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.125645 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.125660 1147232 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-563652 && echo "embed-certs-563652" | sudo tee /etc/hostname
	I0731 21:28:56.237674 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-563652
	
	I0731 21:28:56.237709 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.240783 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.241212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.241460 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.241660 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.241850 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.242009 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.242230 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.242458 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.242479 1147232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-563652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-563652/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-563652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:28:56.353104 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
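	The hosts-update snippet above is a plain shell fragment sent over SSH: it adds or rewrites the 127.0.1.1 entry for the new hostname. As an illustration only, a minimal Go sketch (hostsUpdateCmd is a hypothetical helper, not minikube's actual function) that composes the same fragment for an arbitrary hostname:

package main

import "fmt"

// hostsUpdateCmd returns the shell snippet seen in the log above,
// parameterized on the hostname. Illustrative sketch only.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("embed-certs-563652"))
}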
	I0731 21:28:56.353138 1147232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:28:56.353165 1147232 buildroot.go:174] setting up certificates
	I0731 21:28:56.353180 1147232 provision.go:84] configureAuth start
	I0731 21:28:56.353193 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetMachineName
	I0731 21:28:56.353590 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:56.356346 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356736 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.356767 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.356921 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.359016 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359319 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.359364 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.359530 1147232 provision.go:143] copyHostCerts
	I0731 21:28:56.359595 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:28:56.359605 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:28:56.359674 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:28:56.359763 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:28:56.359772 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:28:56.359795 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:28:56.359858 1147232 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:28:56.359864 1147232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:28:56.359886 1147232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:28:56.359961 1147232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.embed-certs-563652 san=[127.0.0.1 192.168.50.203 embed-certs-563652 localhost minikube]
	I0731 21:28:56.517263 1147232 provision.go:177] copyRemoteCerts
	I0731 21:28:56.517324 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:28:56.517355 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.519965 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520292 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.520326 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.520523 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.520745 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.520956 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.521090 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:56.602671 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:28:56.626882 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:28:56.651212 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:28:56.674469 1147232 provision.go:87] duration metric: took 321.274463ms to configureAuth
	I0731 21:28:56.674505 1147232 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:28:56.674734 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:28:56.674830 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.677835 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678185 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.678215 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.678375 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.678563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678741 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.678898 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.679075 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:56.679259 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:56.679275 1147232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:28:56.930106 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:28:56.930136 1147232 machine.go:97] duration metric: took 912.97079ms to provisionDockerMachine
	I0731 21:28:56.930148 1147232 start.go:293] postStartSetup for "embed-certs-563652" (driver="kvm2")
	I0731 21:28:56.930159 1147232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:28:56.930177 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:56.930534 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:28:56.930563 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:56.933241 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933656 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:56.933689 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:56.933795 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:56.934062 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:56.934228 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:56.934372 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.015059 1147232 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:28:57.019339 1147232 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:28:57.019376 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:28:57.019472 1147232 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:28:57.019581 1147232 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:28:57.019680 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:28:57.029381 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:28:57.052530 1147232 start.go:296] duration metric: took 122.364505ms for postStartSetup
	I0731 21:28:57.052583 1147232 fix.go:56] duration metric: took 19.904651181s for fixHost
	I0731 21:28:57.052612 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.055423 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.055802 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.055852 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.056142 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.056343 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056494 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.056668 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.056844 1147232 main.go:141] libmachine: Using SSH client type: native
	I0731 21:28:57.057017 1147232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0731 21:28:57.057028 1147232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:28:57.156776 1147232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461337.115873615
	
	I0731 21:28:57.156816 1147232 fix.go:216] guest clock: 1722461337.115873615
	I0731 21:28:57.156847 1147232 fix.go:229] Guest: 2024-07-31 21:28:57.115873615 +0000 UTC Remote: 2024-07-31 21:28:57.05258776 +0000 UTC m=+232.627404404 (delta=63.285855ms)
	I0731 21:28:57.156883 1147232 fix.go:200] guest clock delta is within tolerance: 63.285855ms
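	The "guest clock delta" check above subtracts the host-side timestamp from the guest's "date +%s.%N" output and accepts the host if the difference is small. A self-contained sketch using the values from this run (the 1-second tolerance is an assumption, not taken from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock read from "date +%s.%N" in the log above.
	guest := time.Unix(1722461337, 115873615)
	// Host-side timestamp, reconstructed here for illustration.
	remote := guest.Add(-63285855 * time.Nanosecond)
	delta := guest.Sub(remote)
	const tolerance = 1 * time.Second // assumed tolerance value
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
}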
	I0731 21:28:57.156901 1147232 start.go:83] releasing machines lock for "embed-certs-563652", held for 20.008989513s
	I0731 21:28:57.156936 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.157244 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:57.159882 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160307 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.160334 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.160545 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161086 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161266 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:28:57.161349 1147232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:28:57.161394 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.161460 1147232 ssh_runner.go:195] Run: cat /version.json
	I0731 21:28:57.161481 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:28:57.164126 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164511 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.164552 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164583 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.164719 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.164942 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165001 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:57.165022 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:57.165106 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165194 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:28:57.165277 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.165369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:28:57.165536 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:28:57.165692 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:28:57.261717 1147232 ssh_runner.go:195] Run: systemctl --version
	I0731 21:28:57.267459 1147232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:28:57.412757 1147232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:28:57.418248 1147232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:28:57.418317 1147232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:28:57.437752 1147232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:28:57.437786 1147232 start.go:495] detecting cgroup driver to use...
	I0731 21:28:57.437874 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:28:57.456832 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:28:57.472719 1147232 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:28:57.472803 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:28:57.486630 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:28:57.500635 1147232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:28:57.626291 1147232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:28:57.775374 1147232 docker.go:233] disabling docker service ...
	I0731 21:28:57.775563 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:28:57.789797 1147232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:28:57.803545 1147232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:28:57.944871 1147232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:28:58.088067 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:28:58.112885 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:28:58.133234 1147232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:28:58.133301 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.144149 1147232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:28:58.144234 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.154684 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.165572 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.176638 1147232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:28:58.187948 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.198949 1147232 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:28:58.219594 1147232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
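	The sed invocations above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A hedged Go sketch of the same pattern, showing only the first two edits and an invented sedInPlace helper:

package main

import (
	"fmt"
	"os/exec"
)

// sedInPlace applies a single sed expression to a file via sudo.
func sedInPlace(expr, file string) error {
	return exec.Command("sudo", "sed", "-i", expr, file).Run()
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	}
	for _, e := range edits {
		if err := sedInPlace(e, conf); err != nil {
			fmt.Println("sed failed:", err)
		}
	}
}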
	I0731 21:28:58.230888 1147232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:28:58.241112 1147232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:28:58.241175 1147232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
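	The failed sysctl above is the expected probe: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the fallback is to modprobe it. A minimal sketch of that check-then-load flow (assumed control flow, not minikube's verbatim code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the sysctl key and, if it is missing,
// loads br_netfilter so the key becomes available.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // key already present
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("modprobe br_netfilter failed")
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("warning:", err)
	}
}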
	I0731 21:28:58.255158 1147232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:28:58.265191 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:28:58.401923 1147232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:28:58.534900 1147232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:28:58.534980 1147232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:28:58.539618 1147232 start.go:563] Will wait 60s for crictl version
	I0731 21:28:58.539700 1147232 ssh_runner.go:195] Run: which crictl
	I0731 21:28:58.543605 1147232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:28:58.578544 1147232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:28:58.578653 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.608074 1147232 ssh_runner.go:195] Run: crio --version
	I0731 21:28:58.638975 1147232 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:28:58.640454 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetIP
	I0731 21:28:58.643630 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644168 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:28:58.644204 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:28:58.644497 1147232 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 21:28:58.648555 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:28:58.661131 1147232 kubeadm.go:883] updating cluster {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:28:58.661262 1147232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:28:58.661307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:28:58.696977 1147232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:28:58.697058 1147232 ssh_runner.go:195] Run: which lz4
	I0731 21:28:58.700913 1147232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:28:58.705097 1147232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:28:58.705135 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:28:57.185854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .Start
	I0731 21:28:57.186093 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring networks are active...
	I0731 21:28:57.186915 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network default is active
	I0731 21:28:57.187268 1147424 main.go:141] libmachine: (old-k8s-version-275462) Ensuring network mk-old-k8s-version-275462 is active
	I0731 21:28:57.187627 1147424 main.go:141] libmachine: (old-k8s-version-275462) Getting domain xml...
	I0731 21:28:57.188447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Creating domain...
	I0731 21:28:58.502711 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting to get IP...
	I0731 21:28:58.503791 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.504272 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.504341 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.504250 1148436 retry.go:31] will retry after 309.193175ms: waiting for machine to come up
	I0731 21:28:58.815172 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:58.815690 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:58.815745 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:58.815657 1148436 retry.go:31] will retry after 271.329404ms: waiting for machine to come up
	I0731 21:28:59.089281 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.089738 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.089778 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.089705 1148436 retry.go:31] will retry after 354.250517ms: waiting for machine to come up
	I0731 21:28:59.445390 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.445869 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.445895 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.445823 1148436 retry.go:31] will retry after 434.740787ms: waiting for machine to come up
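	The old-k8s-version-275462 VM is still waiting for a DHCP lease here, and retry.go logs a growing "will retry after" interval between polls. A rough, self-contained sketch of such a poll-with-backoff loop (lookupIP is a placeholder for the libvirt lease query; the backoff values are illustrative, not the ones used by retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; placeholder only.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP polls lookupIP until it succeeds or the timeout expires,
// sleeping a randomized interval between attempts.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := time.Duration(200+rand.Intn(800)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}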
	I0731 21:29:00.142120 1147232 crio.go:462] duration metric: took 1.441232682s to copy over tarball
	I0731 21:29:00.142222 1147232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:02.454101 1147232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.311834948s)
	I0731 21:29:02.454139 1147232 crio.go:469] duration metric: took 2.311975688s to extract the tarball
	I0731 21:29:02.454150 1147232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:02.493307 1147232 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:02.541225 1147232 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:02.541257 1147232 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:02.541268 1147232 kubeadm.go:934] updating node { 192.168.50.203 8443 v1.30.3 crio true true} ...
	I0731 21:29:02.541448 1147232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-563652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:02.541548 1147232 ssh_runner.go:195] Run: crio config
	I0731 21:29:02.586951 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:02.586976 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:02.586989 1147232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:02.587016 1147232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-563652 NodeName:embed-certs-563652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:02.587188 1147232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-563652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:02.587287 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:02.598944 1147232 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:02.599041 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:02.610271 1147232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0731 21:29:02.627952 1147232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:02.644727 1147232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0731 21:29:02.661985 1147232 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:02.665903 1147232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:02.678010 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:02.809768 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:02.826650 1147232 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652 for IP: 192.168.50.203
	I0731 21:29:02.826682 1147232 certs.go:194] generating shared ca certs ...
	I0731 21:29:02.826704 1147232 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:02.826923 1147232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:02.826988 1147232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:02.827005 1147232 certs.go:256] generating profile certs ...
	I0731 21:29:02.827126 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/client.key
	I0731 21:29:02.827208 1147232 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key.0963b177
	I0731 21:29:02.827279 1147232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key
	I0731 21:29:02.827458 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:02.827515 1147232 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:02.827533 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:02.827563 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:02.827598 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:02.827630 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:02.827690 1147232 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:02.828735 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:02.862923 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:02.907648 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:02.950647 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:02.978032 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 21:29:03.007119 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:29:03.031483 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:03.055190 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/embed-certs-563652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:03.079296 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:03.102817 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:03.126115 1147232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:03.149887 1147232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:03.167213 1147232 ssh_runner.go:195] Run: openssl version
	I0731 21:29:03.172827 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:03.183821 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188216 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.188290 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:03.193896 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:03.204706 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:03.215687 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220061 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.220148 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:03.226469 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:03.237668 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:03.248629 1147232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.252962 1147232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.253032 1147232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:03.258590 1147232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
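	Each "openssl x509 -hash" call above computes the subject hash used to name the symlink in /etc/ssl/certs (for example 3ec20f2e.0), which is how OpenSSL locates CA certificates by hash. An illustrative Go wrapper for the same two-step pattern (installCACommand is a made-up helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACommand hashes a PEM certificate with openssl and returns the
// ln command that would publish it under /etc/ssl/certs/<hash>.0.
func installCACommand(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s.0", pem, hash), nil
}

func main() {
	cmd, err := installCACommand("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println(cmd)
}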
	I0731 21:29:03.269656 1147232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:03.274277 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:03.280438 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:03.286378 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:03.292717 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:03.298776 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:03.305022 1147232 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
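	The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A small assumed wrapper around the same check:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether openssl considers the certificate due to
// expire within 86400 seconds (non-zero exit also covers unreadable files).
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s expires within 24h (or unreadable): %v\n", c, expiresWithinADay(c))
	}
}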
	I0731 21:29:03.311507 1147232 kubeadm.go:392] StartCluster: {Name:embed-certs-563652 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-563652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:03.311608 1147232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:03.311676 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.349359 1147232 cri.go:89] found id: ""
	I0731 21:29:03.349457 1147232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:03.359993 1147232 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:03.360015 1147232 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:03.360058 1147232 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:03.371322 1147232 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:03.372350 1147232 kubeconfig.go:125] found "embed-certs-563652" server: "https://192.168.50.203:8443"
	I0731 21:29:03.374391 1147232 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:03.386008 1147232 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.203
	I0731 21:29:03.386053 1147232 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:03.386069 1147232 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:03.386141 1147232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:03.428902 1147232 cri.go:89] found id: ""
	I0731 21:29:03.429001 1147232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:03.445950 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:03.455917 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:03.455954 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:03.456007 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:03.465688 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:03.465757 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:03.475699 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:03.485103 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:03.485179 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:03.495141 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.504430 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:03.504532 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:03.514523 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:03.524199 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:03.524280 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:03.533924 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:03.546105 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:03.656770 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:28:59.882326 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:28:59.882926 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:28:59.882959 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:28:59.882880 1148436 retry.go:31] will retry after 563.345278ms: waiting for machine to come up
	I0731 21:29:00.447702 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:00.448213 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:00.448245 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:00.448155 1148436 retry.go:31] will retry after 605.062991ms: waiting for machine to come up
	I0731 21:29:01.055120 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.055541 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.055564 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.055484 1148436 retry.go:31] will retry after 781.785142ms: waiting for machine to come up
	I0731 21:29:01.838536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:01.839123 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:01.839148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:01.839075 1148436 retry.go:31] will retry after 1.037287171s: waiting for machine to come up
	I0731 21:29:02.878421 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:02.878828 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:02.878860 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:02.878794 1148436 retry.go:31] will retry after 1.796829213s: waiting for machine to come up
	I0731 21:29:04.677338 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:04.677928 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:04.677963 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:04.677848 1148436 retry.go:31] will retry after 2.083632912s: waiting for machine to come up
	I0731 21:29:04.982138 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.325308339s)
	I0731 21:29:04.982177 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.196591 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:05.261920 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
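	(The lines above show minikube rebuilding a stopped control plane in place: kubeconfigs that no longer reference https://control-plane.minikube.internal:8443 are removed, /var/tmp/minikube/kubeadm.yaml is refreshed, and individual "kubeadm init phase" subcommands - certs, kubeconfig, kubelet-start, control-plane, etcd - are re-run against it. A minimal sketch of that phase sequence, assuming a plain local shell rather than minikube's ssh_runner over SSH and its versioned PATH, might look like the following; it is illustrative only, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase names taken from the log above.
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			cmd := "sudo kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("%s\n%s\n", cmd, out)
			if err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}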
	I0731 21:29:05.343027 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:05.343137 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:05.844024 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.344246 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:06.360837 1147232 api_server.go:72] duration metric: took 1.017810929s to wait for apiserver process to appear ...
	I0731 21:29:06.360880 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:06.360916 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:06.361563 1147232 api_server.go:269] stopped: https://192.168.50.203:8443/healthz: Get "https://192.168.50.203:8443/healthz": dial tcp 192.168.50.203:8443: connect: connection refused
	I0731 21:29:06.861091 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.297633 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.297674 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.297691 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.335524 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:09.335568 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:09.361820 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.374624 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.374671 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:06.764436 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:06.764979 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:06.765012 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:06.764918 1148436 retry.go:31] will retry after 2.092811182s: waiting for machine to come up
	I0731 21:29:08.860056 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:08.860536 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:08.860571 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:08.860498 1148436 retry.go:31] will retry after 2.731015709s: waiting for machine to come up
	I0731 21:29:09.861443 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:09.865941 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:09.865978 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.361710 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.365984 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:10.366014 1147232 api_server.go:103] status: https://192.168.50.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:10.861702 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:29:10.866015 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:29:10.872799 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:10.872831 1147232 api_server.go:131] duration metric: took 4.511944174s to wait for apiserver health ...
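	(The healthz wait above follows a predictable progression: connection refused while kube-apiserver starts, 403 for the anonymous user before the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, then 200. A minimal sketch of such a readiness poll, assuming an HTTPS endpoint with a self-signed certificate - hence InsecureSkipVerify - and a fixed poll interval rather than minikube's exact timing, could be:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is not in the host trust store in this setup.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // /healthz returned 200: the apiserver is ready
				}
				// 403 or 500 mean the server is up but still bootstrapping; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.203:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}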
	I0731 21:29:10.872842 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:29:10.872848 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:10.874719 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:10.876229 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:10.886256 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:10.903893 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:10.913974 1147232 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:10.914021 1147232 system_pods.go:61] "coredns-7db6d8ff4d-kscsg" [260d2d5f-fd44-4a0a-813b-fab424728e55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:10.914031 1147232 system_pods.go:61] "etcd-embed-certs-563652" [e278abd0-801d-4156-bcc4-8f0d35a34b2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:10.914045 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [1398c865-6871-45c2-ad93-45b629d1d3c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:10.914055 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [0fbefc31-9024-41cb-b56a-944add33a901] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:10.914066 1147232 system_pods.go:61] "kube-proxy-m4www" [cb2d9b36-d71f-4986-9fb1-547e76fd2e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:10.914076 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [15887051-7657-4bf6-a9ca-3d834d8eb4a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:10.914089 1147232 system_pods.go:61] "metrics-server-569cc877fc-6jkw9" [eb41d2c6-c267-486d-83eb-25e5578b1e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:10.914100 1147232 system_pods.go:61] "storage-provisioner" [5fc70da7-6dac-4e44-865c-495fd5fec485] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:10.914112 1147232 system_pods.go:74] duration metric: took 10.188078ms to wait for pod list to return data ...
	I0731 21:29:10.914125 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:10.917224 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:10.917258 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:10.917272 1147232 node_conditions.go:105] duration metric: took 3.140281ms to run NodePressure ...
	I0731 21:29:10.917294 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:11.176463 1147232 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180506 1147232 kubeadm.go:739] kubelet initialised
	I0731 21:29:11.180529 1147232 kubeadm.go:740] duration metric: took 4.03724ms waiting for restarted kubelet to initialise ...
	I0731 21:29:11.180540 1147232 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:11.185366 1147232 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:13.197693 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:11.594836 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:11.595339 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | unable to find current IP address of domain old-k8s-version-275462 in network mk-old-k8s-version-275462
	I0731 21:29:11.595374 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | I0731 21:29:11.595293 1148436 retry.go:31] will retry after 4.520307648s: waiting for machine to come up
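	(The libmachine retries above poll the libvirt network's DHCP leases for the VM's MAC address, sleeping for a growing interval between attempts - 563ms, 605ms, 781ms and so on up to several seconds - until an IP appears. A rough sketch of that retry-with-growing-backoff pattern follows; lookupIP is a hypothetical stand-in for the real libvirt lease query, and the exact growth factor and jitter are assumptions, not minikube's values.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt network's DHCP leases for a MAC
	// address; it is not the real libmachine call.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, maxWait time.Duration) (string, error) {
		delay := 500 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// Grow the delay with a little jitter, roughly like the intervals above.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("machine with MAC %s did not get an IP within %s", mac, maxWait)
	}

	func main() {
		ip, err := waitForIP("52:54:00:87:e2:c6", 2*time.Minute)
		fmt.Println(ip, err)
	}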
	I0731 21:29:17.633145 1148013 start.go:364] duration metric: took 1m51.491197772s to acquireMachinesLock for "default-k8s-diff-port-755535"
	I0731 21:29:17.633242 1148013 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:17.633255 1148013 fix.go:54] fixHost starting: 
	I0731 21:29:17.633764 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:17.633823 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:17.654593 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0731 21:29:17.655124 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:17.655734 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:17.655770 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:17.656109 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:17.656359 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:17.656530 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:17.658542 1148013 fix.go:112] recreateIfNeeded on default-k8s-diff-port-755535: state=Stopped err=<nil>
	I0731 21:29:17.658585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	W0731 21:29:17.658784 1148013 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:17.660580 1148013 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-755535" ...
	I0731 21:29:16.120431 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120937 1147424 main.go:141] libmachine: (old-k8s-version-275462) Found IP for machine: 192.168.72.107
	I0731 21:29:16.120961 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has current primary IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.120968 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserving static IP address...
	I0731 21:29:16.121466 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.121508 1147424 main.go:141] libmachine: (old-k8s-version-275462) Reserved static IP address: 192.168.72.107
	I0731 21:29:16.121528 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | skip adding static IP to network mk-old-k8s-version-275462 - found existing host DHCP lease matching {name: "old-k8s-version-275462", mac: "52:54:00:87:e2:c6", ip: "192.168.72.107"}
	I0731 21:29:16.121561 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Getting to WaitForSSH function...
	I0731 21:29:16.121599 1147424 main.go:141] libmachine: (old-k8s-version-275462) Waiting for SSH to be available...
	I0731 21:29:16.123460 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123825 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.123849 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.123954 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH client type: external
	I0731 21:29:16.123988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa (-rw-------)
	I0731 21:29:16.124019 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:16.124034 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | About to run SSH command:
	I0731 21:29:16.124049 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | exit 0
	I0731 21:29:16.244331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:16.244741 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetConfigRaw
	I0731 21:29:16.245387 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.248072 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248502 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.248529 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.248857 1147424 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/config.json ...
	I0731 21:29:16.249132 1147424 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:16.249162 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:16.249412 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.252283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252657 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.252687 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.252864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.253096 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253286 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.253433 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.253606 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.253875 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.253895 1147424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:16.356702 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:16.356743 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357088 1147424 buildroot.go:166] provisioning hostname "old-k8s-version-275462"
	I0731 21:29:16.357116 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.357303 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.361044 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361504 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.361540 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.361801 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.362037 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362252 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.362430 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.362618 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.362866 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.362884 1147424 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275462 && echo "old-k8s-version-275462" | sudo tee /etc/hostname
	I0731 21:29:16.478590 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275462
	
	I0731 21:29:16.478635 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.481767 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482148 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.482184 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.482467 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.482716 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.482888 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.483083 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.483323 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:16.483529 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:16.483554 1147424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275462/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:16.597465 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:16.597515 1147424 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:16.597549 1147424 buildroot.go:174] setting up certificates
	I0731 21:29:16.597563 1147424 provision.go:84] configureAuth start
	I0731 21:29:16.597578 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetMachineName
	I0731 21:29:16.597901 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:16.600943 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601347 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.601388 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.601582 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.604296 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604757 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.604787 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.604950 1147424 provision.go:143] copyHostCerts
	I0731 21:29:16.605019 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:16.605037 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:16.605108 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:16.605235 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:16.605249 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:16.605285 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:16.605370 1147424 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:16.605381 1147424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:16.605407 1147424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:16.605474 1147424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275462 san=[127.0.0.1 192.168.72.107 localhost minikube old-k8s-version-275462]
	I0731 21:29:16.959571 1147424 provision.go:177] copyRemoteCerts
	I0731 21:29:16.959637 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:16.959671 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:16.962543 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.962955 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:16.962988 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:16.963253 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:16.963483 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:16.963690 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:16.963885 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.047050 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:17.072833 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 21:29:17.099214 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:17.125846 1147424 provision.go:87] duration metric: took 528.260173ms to configureAuth
	I0731 21:29:17.125892 1147424 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:17.126109 1147424 config.go:182] Loaded profile config "old-k8s-version-275462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 21:29:17.126194 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.129283 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129568 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.129602 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.129926 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.130232 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130458 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.130601 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.130820 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.131002 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.131016 1147424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:17.395537 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:17.395569 1147424 machine.go:97] duration metric: took 1.146418308s to provisionDockerMachine
	I0731 21:29:17.395581 1147424 start.go:293] postStartSetup for "old-k8s-version-275462" (driver="kvm2")
	I0731 21:29:17.395598 1147424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:17.395639 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.395987 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:17.396024 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.398916 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399233 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.399264 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.399447 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.399674 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.399854 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.400026 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.483331 1147424 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:17.487820 1147424 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:17.487856 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:17.487925 1147424 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:17.488012 1147424 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:17.488186 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:17.499484 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:17.525699 1147424 start.go:296] duration metric: took 130.099417ms for postStartSetup
	I0731 21:29:17.525756 1147424 fix.go:56] duration metric: took 20.368597161s for fixHost
	I0731 21:29:17.525785 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.529040 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529525 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.529570 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.529864 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.530095 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530310 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.530481 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.530704 1147424 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:17.530879 1147424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0731 21:29:17.530890 1147424 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:17.632991 1147424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461357.608223429
	
	I0731 21:29:17.633011 1147424 fix.go:216] guest clock: 1722461357.608223429
	I0731 21:29:17.633018 1147424 fix.go:229] Guest: 2024-07-31 21:29:17.608223429 +0000 UTC Remote: 2024-07-31 21:29:17.525761122 +0000 UTC m=+242.704537445 (delta=82.462307ms)
	I0731 21:29:17.633040 1147424 fix.go:200] guest clock delta is within tolerance: 82.462307ms
	I0731 21:29:17.633045 1147424 start.go:83] releasing machines lock for "old-k8s-version-275462", held for 20.475925282s
	I0731 21:29:17.633069 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.633360 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:17.636188 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636565 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.636598 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.636792 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637346 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637569 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .DriverName
	I0731 21:29:17.637674 1147424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:17.637721 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.637831 1147424 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:17.637861 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHHostname
	I0731 21:29:17.640574 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640772 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.640966 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.640996 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641174 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641297 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:17.641331 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:17.641371 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641511 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHPort
	I0731 21:29:17.641564 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641680 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHKeyPath
	I0731 21:29:17.641846 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetSSHUsername
	I0731 21:29:17.641886 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.642184 1147424 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/old-k8s-version-275462/id_rsa Username:docker}
	I0731 21:29:17.716822 1147424 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:17.741404 1147424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:17.892700 1147424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:17.899143 1147424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:17.899252 1147424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:17.915997 1147424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:17.916032 1147424 start.go:495] detecting cgroup driver to use...
	I0731 21:29:17.916133 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:17.933847 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:17.948471 1147424 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:17.948565 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:17.963294 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:17.978417 1147424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:18.100521 1147424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:18.243022 1147424 docker.go:233] disabling docker service ...
	I0731 21:29:18.243104 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:18.258762 1147424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:18.272012 1147424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:18.421137 1147424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:18.564600 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:18.581019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:18.601426 1147424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 21:29:18.601504 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.617312 1147424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:18.617400 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.631697 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.642487 1147424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:18.654548 1147424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:18.666338 1147424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:18.676326 1147424 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:18.676406 1147424 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:18.690225 1147424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:18.702315 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:18.836795 1147424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:18.977840 1147424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:18.977930 1147424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:18.984979 1147424 start.go:563] Will wait 60s for crictl version
	I0731 21:29:18.985059 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:18.989654 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:19.033602 1147424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:19.033701 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.061583 1147424 ssh_runner.go:195] Run: crio --version
	I0731 21:29:19.093228 1147424 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
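
The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" steps above boil down to polling until the runtime socket shows up or a deadline passes. A minimal sketch of that pattern, assuming a hypothetical waitForSocket helper rather than minikube's actual start.go code (which runs the check over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists as a UNIX socket or the
// timeout elapses. Hypothetical helper for illustration only.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
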
	I0731 21:29:15.692077 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:18.191423 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:19.094804 1147424 main.go:141] libmachine: (old-k8s-version-275462) Calling .GetIP
	I0731 21:29:19.098122 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.098620 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e2:c6", ip: ""} in network mk-old-k8s-version-275462: {Iface:virbr3 ExpiryTime:2024-07-31 22:29:08 +0000 UTC Type:0 Mac:52:54:00:87:e2:c6 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:old-k8s-version-275462 Clientid:01:52:54:00:87:e2:c6}
	I0731 21:29:19.098648 1147424 main.go:141] libmachine: (old-k8s-version-275462) DBG | domain old-k8s-version-275462 has defined IP address 192.168.72.107 and MAC address 52:54:00:87:e2:c6 in network mk-old-k8s-version-275462
	I0731 21:29:19.099016 1147424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:19.103372 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:19.117035 1147424 kubeadm.go:883] updating cluster {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:19.117205 1147424 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:29:19.117275 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:19.163252 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:19.163343 1147424 ssh_runner.go:195] Run: which lz4
	I0731 21:29:19.168173 1147424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:29:19.172513 1147424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:29:19.172576 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 21:29:17.662009 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Start
	I0731 21:29:17.662245 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring networks are active...
	I0731 21:29:17.663121 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network default is active
	I0731 21:29:17.663583 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Ensuring network mk-default-k8s-diff-port-755535 is active
	I0731 21:29:17.664059 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Getting domain xml...
	I0731 21:29:17.664837 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Creating domain...
	I0731 21:29:18.989801 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting to get IP...
	I0731 21:29:18.990936 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991376 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:18.991428 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:18.991344 1148583 retry.go:31] will retry after 247.770384ms: waiting for machine to come up
	I0731 21:29:19.241063 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241585 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.241658 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.241549 1148583 retry.go:31] will retry after 287.808437ms: waiting for machine to come up
	I0731 21:29:19.531237 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531849 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.531875 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.531777 1148583 retry.go:31] will retry after 317.584035ms: waiting for machine to come up
	I0731 21:29:19.851691 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852167 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:19.852202 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:19.852128 1148583 retry.go:31] will retry after 555.57435ms: waiting for machine to come up
	I0731 21:29:20.409812 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410356 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:20.410392 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:20.410280 1148583 retry.go:31] will retry after 721.969177ms: waiting for machine to come up
	I0731 21:29:20.195383 1147232 pod_ready.go:102] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:20.703603 1147232 pod_ready.go:92] pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.703634 1147232 pod_ready.go:81] duration metric: took 9.51823955s for pod "coredns-7db6d8ff4d-kscsg" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.703649 1147232 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724000 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.724036 1147232 pod_ready.go:81] duration metric: took 20.374673ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.724051 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732302 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:20.732326 1147232 pod_ready.go:81] duration metric: took 8.267565ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.732340 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747581 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.747609 1147232 pod_ready.go:81] duration metric: took 2.015261928s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.747619 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753322 1147232 pod_ready.go:92] pod "kube-proxy-m4www" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.753348 1147232 pod_ready.go:81] duration metric: took 5.72252ms for pod "kube-proxy-m4www" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.753359 1147232 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758310 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:29:22.758335 1147232 pod_ready.go:81] duration metric: took 4.970124ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:22.758346 1147232 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:20.731858 1147424 crio.go:462] duration metric: took 1.563734165s to copy over tarball
	I0731 21:29:20.732033 1147424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:23.813579 1147424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081445019s)
	I0731 21:29:23.813629 1147424 crio.go:469] duration metric: took 3.081657576s to extract the tarball
	I0731 21:29:23.813640 1147424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:23.855937 1147424 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:23.892640 1147424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 21:29:23.892676 1147424 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:23.892772 1147424 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.892797 1147424 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.892852 1147424 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.892776 1147424 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.893142 1147424 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.893240 1147424 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 21:29:23.893343 1147424 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.893348 1147424 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:23.894880 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:23.894783 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:23.895111 1147424 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:23.894968 1147424 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:23.895194 1147424 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:23.895489 1147424 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:23.895587 1147424 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 21:29:24.036855 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.039761 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.042658 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.045088 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.045098 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.048688 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.088535 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 21:29:24.218808 1147424 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 21:29:24.218845 1147424 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 21:29:24.218881 1147424 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 21:29:24.218918 1147424 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.218930 1147424 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 21:29:24.218936 1147424 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 21:29:24.218943 1147424 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.218965 1147424 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.218978 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218998 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.218890 1147424 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.219058 1147424 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 21:29:24.219078 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219079 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.219084 1147424 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.219135 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238540 1147424 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 21:29:24.238602 1147424 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 21:29:24.238653 1147424 ssh_runner.go:195] Run: which crictl
	I0731 21:29:24.238678 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 21:29:24.238697 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 21:29:24.238736 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 21:29:24.238794 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 21:29:24.238802 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 21:29:24.238851 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 21:29:24.366795 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 21:29:24.371307 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 21:29:24.371394 1147424 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 21:29:24.371436 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 21:29:24.371516 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 21:29:24.380026 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 21:29:24.380043 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 21:29:24.412112 1147424 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 21:29:24.523420 1147424 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:24.671943 1147424 cache_images.go:92] duration metric: took 779.240281ms to LoadCachedImages
	W0731 21:29:24.672078 1147424 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
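
The podman image inspect / LoadCachedImages round-trip above is essentially "ask the runtime whether it already has each image; if not, transfer it from the local cache". A rough sketch of the presence check, shelling out the same way the log does; this is illustrative only (minikube drives the command over SSH via ssh_runner rather than locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already has the image,
// by running `podman image inspect --format {{.Id}}` as in the log above.
func imagePresent(image string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		// podman exits non-zero when the image is missing.
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil
		}
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	present, err := imagePresent("registry.k8s.io/pause:3.2")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("image present:", present)
}
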
	I0731 21:29:24.672114 1147424 kubeadm.go:934] updating node { 192.168.72.107 8443 v1.20.0 crio true true} ...
	I0731 21:29:24.672267 1147424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275462 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:24.672897 1147424 ssh_runner.go:195] Run: crio config
	I0731 21:29:24.722662 1147424 cni.go:84] Creating CNI manager for ""
	I0731 21:29:24.722686 1147424 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:24.722696 1147424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:24.722717 1147424 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275462 NodeName:old-k8s-version-275462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 21:29:24.722892 1147424 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275462"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:29:24.722962 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 21:29:24.733178 1147424 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:24.733273 1147424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:24.743515 1147424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 21:29:24.760826 1147424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:24.779805 1147424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 21:29:24.798560 1147424 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:24.802406 1147424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:24.815015 1147424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:21.134251 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134731 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:21.134764 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:21.134687 1148583 retry.go:31] will retry after 934.566416ms: waiting for machine to come up
	I0731 21:29:22.071038 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.071631 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.071554 1148583 retry.go:31] will retry after 884.282326ms: waiting for machine to come up
	I0731 21:29:22.957241 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957617 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:22.957687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:22.957599 1148583 retry.go:31] will retry after 1.014946816s: waiting for machine to come up
	I0731 21:29:23.974435 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974845 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:23.974883 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:23.974807 1148583 retry.go:31] will retry after 1.519800108s: waiting for machine to come up
	I0731 21:29:25.496770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497303 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:25.497332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:25.497249 1148583 retry.go:31] will retry after 1.739198883s: waiting for machine to come up
	I0731 21:29:24.767123 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:27.265952 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:29.266044 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:24.937628 1147424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:24.956917 1147424 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462 for IP: 192.168.72.107
	I0731 21:29:24.956949 1147424 certs.go:194] generating shared ca certs ...
	I0731 21:29:24.956972 1147424 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:24.957180 1147424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:24.957243 1147424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:24.957258 1147424 certs.go:256] generating profile certs ...
	I0731 21:29:24.957385 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.key
	I0731 21:29:24.957468 1147424 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key.512f5421
	I0731 21:29:24.957520 1147424 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key
	I0731 21:29:24.957676 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:24.957719 1147424 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:24.957734 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:24.957770 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:24.957805 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:24.957837 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:24.957898 1147424 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:24.958772 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:24.998159 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:25.057520 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:25.098374 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:25.140601 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 21:29:25.187540 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:25.213821 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:25.240997 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:29:25.266970 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:25.292340 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:25.318838 1147424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:25.344071 1147424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:25.361756 1147424 ssh_runner.go:195] Run: openssl version
	I0731 21:29:25.368009 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:25.379741 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.384975 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.385052 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:25.390894 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:25.403007 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:25.415067 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422223 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.422310 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:25.429842 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:25.440874 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:25.451684 1147424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456190 1147424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.456259 1147424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:25.462311 1147424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:29:25.474253 1147424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:25.479088 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:25.485188 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:25.491404 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:25.498223 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:25.504935 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:25.511202 1147424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
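
The `openssl x509 -noout -checkend 86400` calls above each ask whether a certificate expires within the next 24 hours. The same check expressed in Go, as a standalone sketch; the path is just one of the files from the log, not a claim about minikube internals:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within d, i.e. the Go equivalent of `openssl x509 -checkend`.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
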
	I0731 21:29:25.517628 1147424 kubeadm.go:392] StartCluster: {Name:old-k8s-version-275462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-275462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:25.517767 1147424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:25.517832 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.555145 1147424 cri.go:89] found id: ""
	I0731 21:29:25.555227 1147424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:25.565732 1147424 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:25.565758 1147424 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:25.565821 1147424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:25.575700 1147424 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:25.576730 1147424 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-275462" does not appear in /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:25.577437 1147424 kubeconfig.go:62] /home/jenkins/minikube-integration/19360-1093692/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-275462" cluster setting kubeconfig missing "old-k8s-version-275462" context setting]
	I0731 21:29:25.578357 1147424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:25.626975 1147424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:25.637707 1147424 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.107
	I0731 21:29:25.637758 1147424 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:25.637773 1147424 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:25.637826 1147424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:25.674153 1147424 cri.go:89] found id: ""
	I0731 21:29:25.674240 1147424 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:25.692354 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:25.703047 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:25.703081 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:25.703140 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:29:25.712766 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:25.712884 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:25.723121 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:29:25.732767 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:25.732846 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:25.743055 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.752622 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:25.752699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:25.763763 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:29:25.773620 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:25.773699 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:25.784175 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:25.794182 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:25.908515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.676104 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:26.891081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.024837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:27.100397 1147424 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:27.100499 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.600582 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.101391 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:28.601068 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.101502 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:29.600838 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:27.239418 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239872 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:27.239916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:27.239806 1148583 retry.go:31] will retry after 1.907805681s: waiting for machine to come up
	I0731 21:29:29.149605 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150022 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:29.150049 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:29.149966 1148583 retry.go:31] will retry after 3.584697795s: waiting for machine to come up
	I0731 21:29:31.765270 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:34.264994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:30.101071 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:30.601377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.100907 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:31.600736 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.100741 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.601406 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.100616 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:33.601476 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.101619 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:34.601270 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:32.736055 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736539 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | unable to find current IP address of domain default-k8s-diff-port-755535 in network mk-default-k8s-diff-port-755535
	I0731 21:29:32.736574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | I0731 21:29:32.736495 1148583 retry.go:31] will retry after 4.026783834s: waiting for machine to come up
	I0731 21:29:38.016998 1146656 start.go:364] duration metric: took 55.868098686s to acquireMachinesLock for "no-preload-018891"
	I0731 21:29:38.017060 1146656 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:29:38.017069 1146656 fix.go:54] fixHost starting: 
	I0731 21:29:38.017509 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:38.017552 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:38.036034 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0731 21:29:38.036681 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:38.037291 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:29:38.037319 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:38.037687 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:38.037920 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:38.038078 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:29:38.040079 1146656 fix.go:112] recreateIfNeeded on no-preload-018891: state=Stopped err=<nil>
	I0731 21:29:38.040133 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	W0731 21:29:38.040317 1146656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:29:38.042575 1146656 out.go:177] * Restarting existing kvm2 VM for "no-preload-018891" ...
	I0731 21:29:36.766344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:39.265931 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:36.767067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767688 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has current primary IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.767744 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Found IP for machine: 192.168.39.145
	I0731 21:29:36.767774 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserving static IP address...
	I0731 21:29:36.768193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.768234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | skip adding static IP to network mk-default-k8s-diff-port-755535 - found existing host DHCP lease matching {name: "default-k8s-diff-port-755535", mac: "52:54:00:71:57:ff", ip: "192.168.39.145"}
	I0731 21:29:36.768256 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Reserved static IP address: 192.168.39.145
	I0731 21:29:36.768277 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Waiting for SSH to be available...
	I0731 21:29:36.768292 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Getting to WaitForSSH function...
	I0731 21:29:36.770423 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770687 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.770710 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.770880 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH client type: external
	I0731 21:29:36.770909 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa (-rw-------)
	I0731 21:29:36.770966 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:36.770989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | About to run SSH command:
	I0731 21:29:36.771004 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | exit 0
	I0731 21:29:36.892321 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | SSH cmd err, output: <nil>: 
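	(The WaitForSSH probe above shells out to the external ssh binary. Reassembled from the argument list and key path logged just before it, the equivalent manual check would look roughly like the sketch below, with the options reordered ahead of the destination so it can be run by hand; every path, option and address is taken verbatim from the log.)
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa -p 22 docker@192.168.39.145 "exit 0"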
	I0731 21:29:36.892633 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetConfigRaw
	I0731 21:29:36.893372 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:36.896249 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896647 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.896682 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.896983 1148013 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/config.json ...
	I0731 21:29:36.897231 1148013 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:36.897253 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:36.897507 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:36.900381 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900794 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:36.900832 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:36.900940 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:36.901137 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:36.901403 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:36.901591 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:36.901809 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:36.901823 1148013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:37.004424 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:37.004459 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004749 1148013 buildroot.go:166] provisioning hostname "default-k8s-diff-port-755535"
	I0731 21:29:37.004770 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.004989 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.007987 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008391 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.008439 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.008574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.008802 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.008981 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.009190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.009374 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.009588 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.009602 1148013 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-755535 && echo "default-k8s-diff-port-755535" | sudo tee /etc/hostname
	I0731 21:29:37.127160 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-755535
	
	I0731 21:29:37.127190 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.130282 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130701 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.130737 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.130924 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.131178 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131389 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.131537 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.131778 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.132017 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.132037 1148013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-755535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-755535/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-755535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:37.245157 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:37.245201 1148013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:37.245255 1148013 buildroot.go:174] setting up certificates
	I0731 21:29:37.245268 1148013 provision.go:84] configureAuth start
	I0731 21:29:37.245283 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetMachineName
	I0731 21:29:37.245628 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:37.248611 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.248910 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.248944 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.249109 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.251332 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251698 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.251727 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.251911 1148013 provision.go:143] copyHostCerts
	I0731 21:29:37.251973 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:37.251983 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:37.252036 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:37.252164 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:37.252173 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:37.252196 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:37.252258 1148013 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:37.252265 1148013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:37.252283 1148013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:37.252334 1148013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-755535 san=[127.0.0.1 192.168.39.145 default-k8s-diff-port-755535 localhost minikube]
	I0731 21:29:37.356985 1148013 provision.go:177] copyRemoteCerts
	I0731 21:29:37.357046 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:37.357077 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.359635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.359985 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.360014 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.360217 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.360421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.360670 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.360815 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.442709 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:37.467795 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 21:29:37.492389 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:29:37.515837 1148013 provision.go:87] duration metric: took 270.547831ms to configureAuth
	I0731 21:29:37.515882 1148013 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:37.516070 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:37.516200 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.519062 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519432 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.519469 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.519695 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.519920 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520141 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.520323 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.520481 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.520701 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.520726 1148013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:37.780006 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
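	(The printf verb in the provisioning command above is eaten by the logger's own format handling, but the output echoed back shows what lands on the guest: /etc/sysconfig/crio.minikube ends up as a one-line environment file, roughly the following.)
	# /etc/sysconfig/crio.minikube, content inferred from the SSH output above
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '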
	
	I0731 21:29:37.780033 1148013 machine.go:97] duration metric: took 882.786941ms to provisionDockerMachine
	I0731 21:29:37.780047 1148013 start.go:293] postStartSetup for "default-k8s-diff-port-755535" (driver="kvm2")
	I0731 21:29:37.780059 1148013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:37.780081 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:37.780459 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:37.780493 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.783495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.783853 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.783886 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.784068 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.784322 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.784531 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.784714 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:37.866990 1148013 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:37.871294 1148013 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:37.871329 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:37.871408 1148013 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:37.871483 1148013 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:37.871584 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:37.881107 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:37.906964 1148013 start.go:296] duration metric: took 126.897843ms for postStartSetup
	I0731 21:29:37.907016 1148013 fix.go:56] duration metric: took 20.273760895s for fixHost
	I0731 21:29:37.907045 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:37.910120 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910452 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:37.910495 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:37.910747 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:37.910965 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911119 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:37.911255 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:37.911448 1148013 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:37.911690 1148013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0731 21:29:37.911705 1148013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:38.016788 1148013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461377.990571620
	
	I0731 21:29:38.016818 1148013 fix.go:216] guest clock: 1722461377.990571620
	I0731 21:29:38.016830 1148013 fix.go:229] Guest: 2024-07-31 21:29:37.99057162 +0000 UTC Remote: 2024-07-31 21:29:37.907020915 +0000 UTC m=+131.913986687 (delta=83.550705ms)
	I0731 21:29:38.016876 1148013 fix.go:200] guest clock delta is within tolerance: 83.550705ms
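	(For reference: the guest clock is read with what the mangled verbs suggest is "date +%s.%N", and the delta is simply guest minus remote, 1722461377.990571620 - 1722461377.907020915 = 0.083550705 s, i.e. the 83.550705ms reported above, comfortably inside the tolerance check.)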
	I0731 21:29:38.016883 1148013 start.go:83] releasing machines lock for "default-k8s-diff-port-755535", held for 20.383695886s
	I0731 21:29:38.016916 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.017234 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:38.019995 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020405 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.020436 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.020641 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021180 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021387 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:38.021485 1148013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:38.021536 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.021665 1148013 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:38.021693 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:38.024445 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024777 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.024913 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.024946 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025214 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025258 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:38.025291 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:38.025461 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025626 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.025640 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:38.025915 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:38.025907 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.026067 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:38.026237 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:38.129588 1148013 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:38.135557 1148013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:38.276230 1148013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:38.281894 1148013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:38.281977 1148013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:38.298709 1148013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
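	(Same logging artifact in the find invocation above: %!p(MISSING) stands for a plain %p. With the shell escaping restored, a best-guess reconstruction of the disable step, since the log prints the already-parsed arguments, is:)
	sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;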
	I0731 21:29:38.298742 1148013 start.go:495] detecting cgroup driver to use...
	I0731 21:29:38.298815 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:38.316212 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:38.331845 1148013 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:38.331925 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:38.350284 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:38.365411 1148013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:38.502379 1148013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:38.659435 1148013 docker.go:233] disabling docker service ...
	I0731 21:29:38.659544 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:38.676451 1148013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:38.692936 1148013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:38.843766 1148013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:38.974723 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:38.989514 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:39.009753 1148013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:29:39.009822 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.020785 1148013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:39.020857 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.031679 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.047024 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.061692 1148013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:39.072901 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.084049 1148013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:39.101694 1148013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
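	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch assuming the stock image defaults were in place beforehand.)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]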
	I0731 21:29:39.118920 1148013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:39.128796 1148013 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:39.128869 1148013 ssh_runner.go:195] Run: sudo modprobe br_netfilter
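	(The sysctl probe a few lines up fails only because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/ does not exist; that is why crio.go treats the error as "might be okay". Once the modprobe above has run, the key can be confirmed on the guest with:)
	sudo sysctl net.bridge.bridge-nf-call-iptables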
	I0731 21:29:39.143329 1148013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:39.153376 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:39.278414 1148013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:39.427377 1148013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:39.427493 1148013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:39.432178 1148013 start.go:563] Will wait 60s for crictl version
	I0731 21:29:39.432262 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:29:39.435949 1148013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:39.470366 1148013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:39.470494 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.498247 1148013 ssh_runner.go:195] Run: crio --version
	I0731 21:29:39.531071 1148013 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:29:35.101055 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:35.600782 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.101344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:36.600794 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.101402 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:37.601198 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.100947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:38.601332 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.101351 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.601319 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:39.532416 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetIP
	I0731 21:29:39.535677 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536015 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:39.536046 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:39.536341 1148013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:39.540305 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:39.553333 1148013 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:39.553464 1148013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:29:39.553514 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:39.592137 1148013 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:29:39.592216 1148013 ssh_runner.go:195] Run: which lz4
	I0731 21:29:39.596215 1148013 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:29:39.600203 1148013 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
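	(The mangled format verbs in the existence check above decode to plain %s and %y, since Go's fmt keeps the verb letter when an operand is missing; so the probe is, in effect:)
	stat -c "%s %y" /preloaded.tar.lz4
	(It exits 1 here only because the preload tarball has not been copied to the guest yet, which is what triggers the scp on the next line.)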
	I0731 21:29:39.600244 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:29:41.004825 1148013 crio.go:462] duration metric: took 1.408653613s to copy over tarball
	I0731 21:29:41.004930 1148013 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:29:38.043667 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Start
	I0731 21:29:38.043892 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring networks are active...
	I0731 21:29:38.044764 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network default is active
	I0731 21:29:38.045177 1146656 main.go:141] libmachine: (no-preload-018891) Ensuring network mk-no-preload-018891 is active
	I0731 21:29:38.045594 1146656 main.go:141] libmachine: (no-preload-018891) Getting domain xml...
	I0731 21:29:38.046459 1146656 main.go:141] libmachine: (no-preload-018891) Creating domain...
	I0731 21:29:39.353762 1146656 main.go:141] libmachine: (no-preload-018891) Waiting to get IP...
	I0731 21:29:39.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.355279 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.355383 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.355255 1148782 retry.go:31] will retry after 234.245005ms: waiting for machine to come up
	I0731 21:29:39.590814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.591332 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.591358 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.591270 1148782 retry.go:31] will retry after 362.949809ms: waiting for machine to come up
	I0731 21:29:39.956112 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:39.956694 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:39.956721 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:39.956639 1148782 retry.go:31] will retry after 469.324659ms: waiting for machine to come up
	I0731 21:29:40.427518 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.427997 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.428027 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.427953 1148782 retry.go:31] will retry after 463.172567ms: waiting for machine to come up
	I0731 21:29:40.893318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:40.893864 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:40.893890 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:40.893824 1148782 retry.go:31] will retry after 599.834904ms: waiting for machine to come up
	I0731 21:29:41.495844 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:41.496342 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:41.496372 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:41.496291 1148782 retry.go:31] will retry after 856.360903ms: waiting for machine to come up
	I0731 21:29:41.266267 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:43.267009 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:40.101530 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:40.601303 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.100720 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:41.600723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.100890 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.100765 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.101217 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:44.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:43.356436 1148013 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351465263s)
	I0731 21:29:43.356470 1148013 crio.go:469] duration metric: took 2.351606996s to extract the tarball
	I0731 21:29:43.356479 1148013 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:29:43.397583 1148013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:43.443757 1148013 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:29:43.443784 1148013 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:29:43.443793 1148013 kubeadm.go:934] updating node { 192.168.39.145 8444 v1.30.3 crio true true} ...
	I0731 21:29:43.443954 1148013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-755535 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:29:43.444026 1148013 ssh_runner.go:195] Run: crio config
	I0731 21:29:43.494935 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:43.494959 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:43.494973 1148013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:29:43.495006 1148013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-755535 NodeName:default-k8s-diff-port-755535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:29:43.495210 1148013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-755535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
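	(A rendering note on the KubeletConfiguration above: the three evictionHard values print as "0%!"(MISSING) because the trailing percent sign trips the logger's format handling; the values actually written are, in all likelihood, the stock minikube defaults shown below.)
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"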
	
	I0731 21:29:43.495303 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:29:43.505057 1148013 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:29:43.505176 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:29:43.514741 1148013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 21:29:43.534865 1148013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:29:43.554763 1148013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 21:29:43.572433 1148013 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0731 21:29:43.577403 1148013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:43.592858 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:43.737530 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:43.754632 1148013 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535 for IP: 192.168.39.145
	I0731 21:29:43.754662 1148013 certs.go:194] generating shared ca certs ...
	I0731 21:29:43.754686 1148013 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:43.754900 1148013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:29:43.754960 1148013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:29:43.754976 1148013 certs.go:256] generating profile certs ...
	I0731 21:29:43.755093 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/client.key
	I0731 21:29:43.755177 1148013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key.22420a8f
	I0731 21:29:43.755227 1148013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key
	I0731 21:29:43.755381 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:29:43.755424 1148013 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:29:43.755434 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:29:43.755455 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:29:43.755480 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:29:43.755500 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:29:43.755539 1148013 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:43.756235 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:29:43.800725 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:29:43.835648 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:29:43.880032 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:29:43.915459 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 21:29:43.943694 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:29:43.968578 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:29:43.993192 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/default-k8s-diff-port-755535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:29:44.017364 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:29:44.041303 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:29:44.065792 1148013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:29:44.089991 1148013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:29:44.107888 1148013 ssh_runner.go:195] Run: openssl version
	I0731 21:29:44.113758 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:29:44.125576 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130648 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.130727 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:29:44.137311 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:29:44.149135 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:29:44.160439 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165263 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.165329 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:29:44.171250 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:29:44.182798 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:29:44.194037 1148013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198577 1148013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.198658 1148013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:29:44.204406 1148013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
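	
	The three blocks above repeat one pattern per CA bundle: copy the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, and expose it as /etc/ssl/certs/<hash>.0, which is where OpenSSL-style trust lookups expect to find it. A rough Go sketch of that last step, assuming openssl is on PATH (the cert path is from the log, the helper name is made up):
	
	package main
	
	import (
	    "fmt"
	    "os"
	    "os/exec"
	    "strings"
	)
	
	// linkByHash asks openssl for the certificate's subject hash and points
	// /etc/ssl/certs/<hash>.0 at the certificate.
	func linkByHash(certPath string) error {
	    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    if err != nil {
	        return err
	    }
	    hash := strings.TrimSpace(string(out))
	    link := "/etc/ssl/certs/" + hash + ".0"
	    os.Remove(link) // replace a stale link, like `ln -fs`
	    return os.Symlink(certPath, link)
	}
	
	func main() {
	    if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	        fmt.Fprintln(os.Stderr, err)
	    }
	}
	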
	I0731 21:29:44.215573 1148013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:29:44.221587 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:29:44.229391 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:29:44.237371 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:29:44.244379 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:29:44.250414 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:29:44.256557 1148013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
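	
	The six `openssl x509 ... -checkend 86400` runs above simply verify that each control-plane certificate stays valid for at least another 24 hours before the existing cluster is reused. The same check expressed in Go, as a sketch (the path is one of the certs from the log):
	
	package main
	
	import (
	    "crypto/x509"
	    "encoding/pem"
	    "fmt"
	    "os"
	    "time"
	)
	
	// validFor reports whether the PEM certificate at path is still valid
	// for at least d; d = 24h corresponds to `-checkend 86400`.
	func validFor(path string, d time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, fmt.Errorf("no PEM data in %s", path)
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return time.Now().Add(d).Before(cert.NotAfter), nil
	}
	
	func main() {
	    ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    fmt.Println(ok, err)
	}
	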
	I0731 21:29:44.262804 1148013 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-755535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-755535 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:29:44.262928 1148013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:29:44.262993 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.298720 1148013 cri.go:89] found id: ""
	I0731 21:29:44.298826 1148013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:29:44.310173 1148013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:29:44.310199 1148013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:29:44.310258 1148013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:29:44.321273 1148013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:29:44.322769 1148013 kubeconfig.go:125] found "default-k8s-diff-port-755535" server: "https://192.168.39.145:8444"
	I0731 21:29:44.325832 1148013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:29:44.336366 1148013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.145
	I0731 21:29:44.336407 1148013 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:29:44.336427 1148013 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:29:44.336498 1148013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:29:44.383500 1148013 cri.go:89] found id: ""
	I0731 21:29:44.383591 1148013 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:29:44.399444 1148013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:29:44.410687 1148013 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:29:44.410711 1148013 kubeadm.go:157] found existing configuration files:
	
	I0731 21:29:44.410769 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 21:29:44.420845 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:29:44.420925 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:29:44.430476 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 21:29:44.440198 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:29:44.440277 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:29:44.450195 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.459883 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:29:44.459966 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:29:44.470649 1148013 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 21:29:44.480689 1148013 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:29:44.480764 1148013 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:29:44.490628 1148013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:29:44.501343 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:44.642878 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.555233 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.766976 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.832896 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:45.907410 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:29:45.907508 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:42.354282 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:42.354765 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:42.354797 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:42.354694 1148782 retry.go:31] will retry after 1.044468751s: waiting for machine to come up
	I0731 21:29:43.400835 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:43.401345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:43.401402 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:43.401318 1148782 retry.go:31] will retry after 935.157631ms: waiting for machine to come up
	I0731 21:29:44.337853 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:44.338472 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:44.338505 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:44.338397 1148782 retry.go:31] will retry after 1.530891122s: waiting for machine to come up
	I0731 21:29:45.871035 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:45.871693 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:45.871734 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:45.871617 1148782 retry.go:31] will retry after 1.996010352s: waiting for machine to come up
	I0731 21:29:45.765589 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:47.765743 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:45.100963 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:45.601355 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.101354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.601416 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.100953 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:47.601551 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.100775 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:48.601528 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.101362 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:49.601101 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.407820 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.907790 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:46.924949 1148013 api_server.go:72] duration metric: took 1.017537991s to wait for apiserver process to appear ...
	I0731 21:29:46.924989 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:29:46.925016 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:49.933387 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:29:49.933431 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:29:49.933448 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.002123 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.002156 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.425320 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.430430 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.430465 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:50.926039 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:50.931251 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:29:50.931286 1148013 api_server.go:103] status: https://192.168.39.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:29:51.425157 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:29:51.430486 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:29:51.437067 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:29:51.437115 1148013 api_server.go:131] duration metric: took 4.512116778s to wait for apiserver health ...
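	
	The wait loop above keeps hitting /healthz until the post-start hooks (service-ip-repair, rbac/bootstrap-roles, the scheduling priority classes, and the bootstrap-controller) report ok and the endpoint flips from 403/500 to 200. A bare-bones Go illustration of such a poll; TLS verification is skipped here only to keep the sketch short, whereas the real check authenticates against the cluster CA:
	
	package main
	
	import (
	    "crypto/tls"
	    "fmt"
	    "io"
	    "net/http"
	    "time"
	)
	
	// checkHealthz performs one probe against the apiserver's /healthz endpoint,
	// the same URL the log polls above.
	func checkHealthz(url string) (int, string, error) {
	    client := &http.Client{
	        Timeout:   5 * time.Second,
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    resp, err := client.Get(url)
	    if err != nil {
	        return 0, "", err
	    }
	    defer resp.Body.Close()
	    body, _ := io.ReadAll(resp.Body)
	    return resp.StatusCode, string(body), nil
	}
	
	func main() {
	    deadline := time.Now().Add(4 * time.Minute)
	    for time.Now().Before(deadline) {
	        code, body, err := checkHealthz("https://192.168.39.145:8444/healthz")
	        if err == nil && code == http.StatusOK {
	            fmt.Println(body) // "ok"
	            return
	        }
	        time.Sleep(500 * time.Millisecond) // retry until healthy, as the log does
	    }
	    fmt.Println("apiserver did not become healthy in time")
	}
	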
	I0731 21:29:51.437131 1148013 cni.go:84] Creating CNI manager for ""
	I0731 21:29:51.437142 1148013 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:29:51.438770 1148013 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:29:47.869470 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:47.869928 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:47.869960 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:47.869867 1148782 retry.go:31] will retry after 1.758316686s: waiting for machine to come up
	I0731 21:29:49.630515 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:49.631000 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:49.631036 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:49.630936 1148782 retry.go:31] will retry after 2.39654611s: waiting for machine to come up
	I0731 21:29:51.440057 1148013 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:29:51.460432 1148013 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:29:51.479629 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:29:51.491000 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:29:51.491059 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:29:51.491076 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:29:51.491087 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:29:51.491110 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:29:51.491124 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:29:51.491139 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:29:51.491150 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:29:51.491222 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:29:51.491236 1148013 system_pods.go:74] duration metric: took 11.579003ms to wait for pod list to return data ...
	I0731 21:29:51.491252 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:29:51.495163 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:29:51.495206 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:29:51.495239 1148013 node_conditions.go:105] duration metric: took 3.977024ms to run NodePressure ...
	I0731 21:29:51.495263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:29:51.762752 1148013 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768504 1148013 kubeadm.go:739] kubelet initialised
	I0731 21:29:51.768541 1148013 kubeadm.go:740] duration metric: took 5.756089ms waiting for restarted kubelet to initialise ...
	I0731 21:29:51.768554 1148013 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:29:51.776242 1148013 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.783488 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783533 1148013 pod_ready.go:81] duration metric: took 7.250424ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.783547 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.783558 1148013 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.790100 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790143 1148013 pod_ready.go:81] duration metric: took 6.573129ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.790159 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.790170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.797457 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797498 1148013 pod_ready.go:81] duration metric: took 7.319359ms for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.797513 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.797533 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:51.883109 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883149 1148013 pod_ready.go:81] duration metric: took 85.605451ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:51.883162 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:51.883170 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.283454 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283484 1148013 pod_ready.go:81] duration metric: took 400.306586ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.283495 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-proxy-mqcmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.283511 1148013 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:52.682926 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682965 1148013 pod_ready.go:81] duration metric: took 399.442627ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:52.682982 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.682991 1148013 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:53.083528 1148013 pod_ready.go:97] node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083573 1148013 pod_ready.go:81] duration metric: took 400.571455ms for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:29:53.083590 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-755535" hosting pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:53.083601 1148013 pod_ready.go:38] duration metric: took 1.315033985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
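	
	The pod_ready.go lines above poll each system-critical pod and skip it while the node itself still reports Ready=False. As a sketch of the underlying readiness check with client-go (the kubeconfig path and pod name are taken from the log; the helper is illustrative, not minikube's actual function):
	
	package main
	
	import (
	    "context"
	    "fmt"
	    "time"
	
	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	        if err == nil {
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                    return nil
	                }
	            }
	        }
	        time.Sleep(2 * time.Second)
	    }
	    return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}
	
	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-t9v4z", 4*time.Minute))
	}
	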
	I0731 21:29:53.083623 1148013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:29:53.095349 1148013 ops.go:34] apiserver oom_adj: -16
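	
	The oom_adj read above confirms the restarted kube-apiserver is shielded from the OOM killer (-16 on this guest). A small Go equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj`, assuming pgrep is available on the host:
	
	package main
	
	import (
	    "fmt"
	    "os"
	    "os/exec"
	    "strings"
	)
	
	// apiserverOOMAdj finds the kube-apiserver pid and reads its OOM adjustment.
	func apiserverOOMAdj() (string, error) {
	    out, err := exec.Command("pgrep", "kube-apiserver").Output()
	    if err != nil {
	        return "", err
	    }
	    pids := strings.Fields(string(out))
	    if len(pids) == 0 {
	        return "", fmt.Errorf("kube-apiserver not running")
	    }
	    data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	    if err != nil {
	        return "", err
	    }
	    return strings.TrimSpace(string(data)), nil
	}
	
	func main() {
	    adj, err := apiserverOOMAdj()
	    fmt.Println(adj, err)
	}
	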
	I0731 21:29:53.095379 1148013 kubeadm.go:597] duration metric: took 8.785172139s to restartPrimaryControlPlane
	I0731 21:29:53.095391 1148013 kubeadm.go:394] duration metric: took 8.832597905s to StartCluster
	I0731 21:29:53.095416 1148013 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.095513 1148013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:29:53.097384 1148013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:29:53.097693 1148013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:29:53.097768 1148013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:29:53.097863 1148013 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097905 1148013 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.097914 1148013 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:29:53.097918 1148013 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.097949 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.097956 1148013 config.go:182] Loaded profile config "default-k8s-diff-port-755535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:29:53.097964 1148013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-755535"
	I0731 21:29:53.097960 1148013 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-755535"
	I0731 21:29:53.098052 1148013 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.098070 1148013 addons.go:243] addon metrics-server should already be in state true
	I0731 21:29:53.098129 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.098364 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098389 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098405 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098465 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.098544 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.098578 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.099612 1148013 out.go:177] * Verifying Kubernetes components...
	I0731 21:29:53.100943 1148013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:53.116043 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0731 21:29:53.116121 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0731 21:29:53.116663 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.116670 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.117278 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117297 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117558 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.117575 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.117662 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.118320 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.118358 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.118788 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34779
	I0731 21:29:53.118820 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.119468 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.119498 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.119509 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.120181 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.120208 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.120626 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.120828 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.125024 1148013 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-755535"
	W0731 21:29:53.125051 1148013 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:29:53.125087 1148013 host.go:66] Checking if "default-k8s-diff-port-755535" exists ...
	I0731 21:29:53.125470 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.125510 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.136521 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0731 21:29:53.137246 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.137866 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.137907 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.138331 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.138574 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.140269 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0731 21:29:53.140615 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.140722 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.141377 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.141402 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.141846 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.142108 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.142832 1148013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:53.143979 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0731 21:29:53.144037 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.144302 1148013 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.144321 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:29:53.144342 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.145270 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.145539 1148013 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:29:49.766048 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:52.266842 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:53.145875 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.145898 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.146651 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.146842 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:29:53.146863 1148013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:29:53.146891 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.147198 1148013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:29:53.147235 1148013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:29:53.148082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149156 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.149247 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.149438 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.149635 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.149758 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.149890 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.150082 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150593 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.150624 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.150825 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.151024 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.151193 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.151423 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.164594 1148013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 21:29:53.165088 1148013 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:29:53.165634 1148013 main.go:141] libmachine: Using API Version  1
	I0731 21:29:53.165649 1148013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:29:53.165919 1148013 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:29:53.166093 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetState
	I0731 21:29:53.167775 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .DriverName
	I0731 21:29:53.168002 1148013 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.168016 1148013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:29:53.168032 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHHostname
	I0731 21:29:53.171696 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172236 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:57:ff", ip: ""} in network mk-default-k8s-diff-port-755535: {Iface:virbr2 ExpiryTime:2024-07-31 22:29:29 +0000 UTC Type:0 Mac:52:54:00:71:57:ff Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:default-k8s-diff-port-755535 Clientid:01:52:54:00:71:57:ff}
	I0731 21:29:53.172266 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | domain default-k8s-diff-port-755535 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:57:ff in network mk-default-k8s-diff-port-755535
	I0731 21:29:53.172492 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHPort
	I0731 21:29:53.172717 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHKeyPath
	I0731 21:29:53.172890 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .GetSSHUsername
	I0731 21:29:53.173081 1148013 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/default-k8s-diff-port-755535/id_rsa Username:docker}
	I0731 21:29:53.313528 1148013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:29:53.332410 1148013 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:29:53.467443 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:29:53.481915 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:29:53.481943 1148013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:29:53.503095 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:29:53.524005 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:29:53.524039 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:29:53.577476 1148013 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:53.577511 1148013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:29:53.630711 1148013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:29:54.451991 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452029 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452078 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452115 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452387 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452404 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452412 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452421 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452526 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452551 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452565 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452574 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.452582 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.452667 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.452684 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452691 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.452849 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.452869 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.458865 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.458888 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.459191 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.459208 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472307 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472337 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.472690 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.472706 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.472713 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.472733 1148013 main.go:141] libmachine: Making call to close driver server
	I0731 21:29:54.472742 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) Calling .Close
	I0731 21:29:54.473021 1148013 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:29:54.473070 1148013 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:29:54.473074 1148013 main.go:141] libmachine: (default-k8s-diff-port-755535) DBG | Closing plugin on server side
	I0731 21:29:54.473086 1148013 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-755535"
	I0731 21:29:54.474920 1148013 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:29:50.101380 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:50.601347 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.101325 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:51.601381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.101364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:52.600852 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.101284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:53.601020 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.101330 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.601310 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:54.476085 1148013 addons.go:510] duration metric: took 1.378326564s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:29:55.338873 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:29:52.029262 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:52.029780 1146656 main.go:141] libmachine: (no-preload-018891) DBG | unable to find current IP address of domain no-preload-018891 in network mk-no-preload-018891
	I0731 21:29:52.029807 1146656 main.go:141] libmachine: (no-preload-018891) DBG | I0731 21:29:52.029695 1148782 retry.go:31] will retry after 2.74211918s: waiting for machine to come up
	I0731 21:29:54.773318 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.773762 1146656 main.go:141] libmachine: (no-preload-018891) Found IP for machine: 192.168.61.246
	I0731 21:29:54.773788 1146656 main.go:141] libmachine: (no-preload-018891) Reserving static IP address...
	I0731 21:29:54.773803 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has current primary IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.774221 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.774260 1146656 main.go:141] libmachine: (no-preload-018891) DBG | skip adding static IP to network mk-no-preload-018891 - found existing host DHCP lease matching {name: "no-preload-018891", mac: "52:54:00:3c:b2:a0", ip: "192.168.61.246"}
	I0731 21:29:54.774275 1146656 main.go:141] libmachine: (no-preload-018891) Reserved static IP address: 192.168.61.246
	I0731 21:29:54.774320 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Getting to WaitForSSH function...
	I0731 21:29:54.774343 1146656 main.go:141] libmachine: (no-preload-018891) Waiting for SSH to be available...
	I0731 21:29:54.776952 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.777352 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.777426 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH client type: external
	I0731 21:29:54.777466 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa (-rw-------)
	I0731 21:29:54.777506 1146656 main.go:141] libmachine: (no-preload-018891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:29:54.777522 1146656 main.go:141] libmachine: (no-preload-018891) DBG | About to run SSH command:
	I0731 21:29:54.777564 1146656 main.go:141] libmachine: (no-preload-018891) DBG | exit 0
	I0731 21:29:54.908253 1146656 main.go:141] libmachine: (no-preload-018891) DBG | SSH cmd err, output: <nil>: 
	I0731 21:29:54.908614 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetConfigRaw
	I0731 21:29:54.909339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:54.911937 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912315 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.912345 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.912621 1146656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/config.json ...
	I0731 21:29:54.912837 1146656 machine.go:94] provisionDockerMachine start ...
	I0731 21:29:54.912858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:54.913092 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:54.915328 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915698 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:54.915725 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:54.915862 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:54.916060 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916209 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:54.916385 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:54.916563 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:54.916797 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:54.916812 1146656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:29:55.032674 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:29:55.032715 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033152 1146656 buildroot.go:166] provisioning hostname "no-preload-018891"
	I0731 21:29:55.033189 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.033429 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.036142 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036488 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.036553 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.036710 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.036938 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.037373 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.037586 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.037851 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.037869 1146656 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-018891 && echo "no-preload-018891" | sudo tee /etc/hostname
	I0731 21:29:55.170895 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-018891
	
	I0731 21:29:55.170923 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.174018 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174357 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.174382 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.174594 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.174835 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175025 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.175153 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.175333 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.175578 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.175595 1146656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018891/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:29:55.296570 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:29:55.296606 1146656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19360-1093692/.minikube CaCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19360-1093692/.minikube}
	I0731 21:29:55.296634 1146656 buildroot.go:174] setting up certificates
	I0731 21:29:55.296645 1146656 provision.go:84] configureAuth start
	I0731 21:29:55.296658 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetMachineName
	I0731 21:29:55.297022 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:55.299891 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300300 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.300329 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.300525 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.302808 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303146 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.303176 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.303306 1146656 provision.go:143] copyHostCerts
	I0731 21:29:55.303365 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem, removing ...
	I0731 21:29:55.303375 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem
	I0731 21:29:55.303430 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/key.pem (1675 bytes)
	I0731 21:29:55.303533 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem, removing ...
	I0731 21:29:55.303541 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem
	I0731 21:29:55.303565 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.pem (1082 bytes)
	I0731 21:29:55.303638 1146656 exec_runner.go:144] found /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem, removing ...
	I0731 21:29:55.303645 1146656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem
	I0731 21:29:55.303662 1146656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19360-1093692/.minikube/cert.pem (1123 bytes)
	I0731 21:29:55.303773 1146656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem org=jenkins.no-preload-018891 san=[127.0.0.1 192.168.61.246 localhost minikube no-preload-018891]
	I0731 21:29:55.451740 1146656 provision.go:177] copyRemoteCerts
	I0731 21:29:55.451822 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:29:55.451858 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.454972 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455327 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.455362 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.455522 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.455783 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.455966 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.456166 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.541939 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:29:55.567967 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:29:55.593630 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:29:55.621511 1146656 provision.go:87] duration metric: took 324.845258ms to configureAuth
	I0731 21:29:55.621546 1146656 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:29:55.621737 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:29:55.621823 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.624639 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625021 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.625054 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.625277 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.625515 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625755 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.625921 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.626150 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:55.626404 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:55.626428 1146656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:29:55.896753 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:29:55.896785 1146656 machine.go:97] duration metric: took 983.934543ms to provisionDockerMachine
	I0731 21:29:55.896799 1146656 start.go:293] postStartSetup for "no-preload-018891" (driver="kvm2")
	I0731 21:29:55.896818 1146656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:29:55.896863 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:55.897196 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:29:55.897229 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:55.899769 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900156 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:55.900190 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:55.900383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:55.900612 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:55.900765 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:55.900903 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:55.987436 1146656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:29:55.991924 1146656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:29:55.991958 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/addons for local assets ...
	I0731 21:29:55.992027 1146656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19360-1093692/.minikube/files for local assets ...
	I0731 21:29:55.992144 1146656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem -> 11009762.pem in /etc/ssl/certs
	I0731 21:29:55.992312 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:29:56.002524 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:29:56.026998 1146656 start.go:296] duration metric: took 130.182157ms for postStartSetup
	I0731 21:29:56.027046 1146656 fix.go:56] duration metric: took 18.009977848s for fixHost
	I0731 21:29:56.027071 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.029907 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030303 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.030324 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.030493 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.030731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.030907 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.031055 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.031254 1146656 main.go:141] libmachine: Using SSH client type: native
	I0731 21:29:56.031490 1146656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.246 22 <nil> <nil>}
	I0731 21:29:56.031503 1146656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:29:56.149163 1146656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722461396.115095611
	
	I0731 21:29:56.149199 1146656 fix.go:216] guest clock: 1722461396.115095611
	I0731 21:29:56.149211 1146656 fix.go:229] Guest: 2024-07-31 21:29:56.115095611 +0000 UTC Remote: 2024-07-31 21:29:56.027049922 +0000 UTC m=+369.298206393 (delta=88.045689ms)
	I0731 21:29:56.149267 1146656 fix.go:200] guest clock delta is within tolerance: 88.045689ms
	I0731 21:29:56.149294 1146656 start.go:83] releasing machines lock for "no-preload-018891", held for 18.13224564s
	I0731 21:29:56.149320 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.149597 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:56.152941 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153307 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.153359 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.153492 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154130 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154353 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:29:56.154450 1146656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:29:56.154497 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.154650 1146656 ssh_runner.go:195] Run: cat /version.json
	I0731 21:29:56.154678 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:29:56.157376 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157795 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.157838 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.157858 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158006 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158227 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158396 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:56.158422 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.158421 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:56.158568 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:29:56.158646 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.158731 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:29:56.158879 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:29:56.159051 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:29:56.241170 1146656 ssh_runner.go:195] Run: systemctl --version
	I0731 21:29:56.259519 1146656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:29:56.414823 1146656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:29:56.420732 1146656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:29:56.420805 1146656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:29:56.438423 1146656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:29:56.438461 1146656 start.go:495] detecting cgroup driver to use...
	I0731 21:29:56.438567 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:29:56.456069 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:29:56.471320 1146656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:29:56.471399 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:29:56.486206 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:29:56.501601 1146656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:29:56.623367 1146656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:29:56.774879 1146656 docker.go:233] disabling docker service ...
	I0731 21:29:56.774969 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:29:56.792295 1146656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:29:56.809957 1146656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:29:56.961634 1146656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:29:57.102957 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:29:57.118907 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:29:57.139231 1146656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:29:57.139301 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.150471 1146656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:29:57.150547 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.160951 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.171556 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.182777 1146656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:29:57.196310 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.209689 1146656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.227660 1146656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:29:57.238058 1146656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:29:57.248326 1146656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:29:57.248388 1146656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:29:57.261076 1146656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:29:57.272002 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:29:57.406445 1146656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:29:57.540657 1146656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:29:57.540765 1146656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:29:57.546161 1146656 start.go:563] Will wait 60s for crictl version
	I0731 21:29:57.546233 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.550021 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:29:57.589152 1146656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:29:57.589272 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.618944 1146656 ssh_runner.go:195] Run: crio --version
	I0731 21:29:57.650646 1146656 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:29:54.766019 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:57.264179 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:59.264724 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:29:55.101321 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:55.600950 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.100785 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:56.601322 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.101431 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.601331 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.101425 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:58.600958 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.100876 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:59.601349 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:29:57.837038 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.336837 1148013 node_ready.go:53] node "default-k8s-diff-port-755535" has status "Ready":"False"
	I0731 21:30:00.836595 1148013 node_ready.go:49] node "default-k8s-diff-port-755535" has status "Ready":"True"
	I0731 21:30:00.836632 1148013 node_ready.go:38] duration metric: took 7.504184626s for node "default-k8s-diff-port-755535" to be "Ready" ...
	I0731 21:30:00.836644 1148013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:00.841523 1148013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846346 1148013 pod_ready.go:92] pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.846372 1148013 pod_ready.go:81] duration metric: took 4.815855ms for pod "coredns-7db6d8ff4d-t9v4z" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.846383 1148013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851118 1148013 pod_ready.go:92] pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:00.851140 1148013 pod_ready.go:81] duration metric: took 4.751019ms for pod "etcd-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:00.851151 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:29:57.651874 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetIP
	I0731 21:29:57.655070 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655529 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:29:57.655572 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:29:57.655778 1146656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:29:57.659917 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:29:57.673863 1146656 kubeadm.go:883] updating cluster {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:29:57.674037 1146656 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:29:57.674099 1146656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:29:57.714187 1146656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:29:57.714225 1146656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 21:29:57.714285 1146656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.714317 1146656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.714345 1146656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.714370 1146656 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.714378 1146656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.714348 1146656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.714420 1146656 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.714458 1146656 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716109 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.716123 1146656 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.716147 1146656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.716161 1146656 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 21:29:57.716168 1146656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:29:57.716119 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.716527 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.716549 1146656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.848967 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.869777 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.881111 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 21:29:57.888022 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:57.892714 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:57.893611 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:57.908421 1146656 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 21:29:57.908493 1146656 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:57.908554 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:57.914040 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:57.985691 1146656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 21:29:57.985757 1146656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:57.985814 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.128813 1146656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 21:29:58.128930 1146656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.128947 1146656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 21:29:58.128996 1146656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.129046 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129061 1146656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 21:29:58.129088 1146656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.129115 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129000 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.129194 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 21:29:58.129262 1146656 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 21:29:58.129309 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 21:29:58.129312 1146656 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.129389 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:29:58.141411 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 21:29:58.141477 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 21:29:58.212758 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 21:29:58.212783 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212847 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 21:29:58.212860 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 21:29:58.212928 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.212933 1146656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:29:58.226942 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227020 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.227057 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:29:58.227113 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:29:58.265352 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 21:29:58.265470 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:29:58.276064 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276115 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276128 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 21:29:58.276150 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276176 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 21:29:58.276186 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 21:29:58.276213 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 21:29:58.276248 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.276359 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:29:58.280583 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 21:29:58.363934 1146656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.050742 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.774531298s)
	I0731 21:30:01.050793 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 21:30:01.050832 1146656 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050926 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 21:30:01.050839 1146656 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.686857972s)
	I0731 21:30:01.051031 1146656 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 21:30:01.051073 1146656 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:01.051118 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:30:01.266241 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:03.764462 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:00.101336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:00.601036 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.101381 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:01.601371 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.100649 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.601354 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.101316 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:03.601374 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.101099 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:04.601146 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:02.860276 1148013 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:04.360452 1148013 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.360479 1148013 pod_ready.go:81] duration metric: took 3.509320908s for pod "kube-apiserver-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.360496 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367733 1148013 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.367757 1148013 pod_ready.go:81] duration metric: took 7.253266ms for pod "kube-controller-manager-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.367768 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372693 1148013 pod_ready.go:92] pod "kube-proxy-mqcmt" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.372719 1148013 pod_ready.go:81] duration metric: took 4.944626ms for pod "kube-proxy-mqcmt" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.372728 1148013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436318 1148013 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:04.436345 1148013 pod_ready.go:81] duration metric: took 63.609569ms for pod "kube-scheduler-default-k8s-diff-port-755535" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.436356 1148013 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:04.339084 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.288125508s)
	I0731 21:30:04.339126 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 21:30:04.339141 1146656 ssh_runner.go:235] Completed: which crictl: (3.288000381s)
	I0731 21:30:04.339164 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:04.339223 1146656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:04.339234 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 21:30:06.225796 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.886536121s)
	I0731 21:30:06.225852 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 21:30:06.225875 1146656 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.886627424s)
	I0731 21:30:06.225900 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.225933 1146656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 21:30:06.225987 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 21:30:06.226038 1146656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:05.764555 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:07.766002 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:05.100624 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:05.600680 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.101286 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.601308 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.100801 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:07.600703 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.101252 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:08.601341 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.101049 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:09.601284 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:06.443235 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.444797 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.950200 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:08.198750 1146656 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972673111s)
	I0731 21:30:08.198802 1146656 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 21:30:08.198831 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.972821334s)
	I0731 21:30:08.198850 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 21:30:08.198878 1146656 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:08.198956 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 21:30:10.054141 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.855149734s)
	I0731 21:30:10.054181 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 21:30:10.054209 1146656 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:10.054263 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 21:30:11.506212 1146656 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.45191421s)
	I0731 21:30:11.506252 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 21:30:11.506294 1146656 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:11.506390 1146656 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 21:30:10.263896 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.264903 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:14.265574 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:10.100825 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:10.601345 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.101377 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:11.601357 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.100679 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:12.600724 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.101278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.600992 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.101359 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:14.601364 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:13.443063 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:15.443624 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:12.356725 1146656 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 21:30:12.356768 1146656 cache_images.go:123] Successfully loaded all cached images
	I0731 21:30:12.356773 1146656 cache_images.go:92] duration metric: took 14.642536081s to LoadCachedImages
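For context on the cache-loading step that finishes above, here is a minimal, hypothetical Go sketch of the inspect-then-load pattern visible in the log: check whether an image is already present in the podman/CRI-O image store and, if not, load its cached tarball. The helper name and example values are illustrative only, not minikube's actual cache_images.go code.

// Illustrative sketch (not minikube's implementation) of the
// "does the image exist? if not, podman load the cached tarball" flow.
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads tarball into the podman image store unless image
// is already present.
func ensureImage(image, tarball string) error {
	// "podman image exists" exits 0 when the image is already in the store.
	if err := exec.Command("sudo", "podman", "image", "exists", image).Run(); err == nil {
		return nil // already loaded, nothing to do
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Example values taken from the images seen in the log above.
	if err := ensureImage("registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0"); err != nil {
		fmt.Println("load failed:", err)
	}
}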
	I0731 21:30:12.356786 1146656 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:30:12.356931 1146656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-018891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:30:12.357036 1146656 ssh_runner.go:195] Run: crio config
	I0731 21:30:12.404684 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:12.404711 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:12.404728 1146656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:30:12.404752 1146656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018891 NodeName:no-preload-018891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:30:12.404917 1146656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018891"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
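The kubeadm/kubelet/kube-proxy YAML above is generated from the options struct logged at kubeadm.go:181. As a rough illustration of the technique (not minikube's actual template or field names), a config like this can be rendered from a small struct with Go's text/template; the values below are the ones seen in this log.

// Minimal sketch, assuming a hypothetical options struct: rendering a
// kubeadm config fragment with text/template.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Example values taken from the log above.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.61.246",
		BindPort:         8443,
		NodeName:         "no-preload-018891",
		PodSubnet:        "10.244.0.0/16",
	})
}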
	
	I0731 21:30:12.404999 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:30:12.416421 1146656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:30:12.416516 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:30:12.426572 1146656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 21:30:12.444613 1146656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:30:12.461161 1146656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 21:30:12.478872 1146656 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I0731 21:30:12.482736 1146656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:30:12.502603 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:12.617670 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:12.634477 1146656 certs.go:68] Setting up /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891 for IP: 192.168.61.246
	I0731 21:30:12.634508 1146656 certs.go:194] generating shared ca certs ...
	I0731 21:30:12.634532 1146656 certs.go:226] acquiring lock for ca certs: {Name:mkfaba598c13a8e6da4324f625faa476553ec3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:12.634740 1146656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key
	I0731 21:30:12.634799 1146656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key
	I0731 21:30:12.634813 1146656 certs.go:256] generating profile certs ...
	I0731 21:30:12.634961 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.key
	I0731 21:30:12.635052 1146656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key.54e88c10
	I0731 21:30:12.635108 1146656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key
	I0731 21:30:12.635312 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem (1338 bytes)
	W0731 21:30:12.635379 1146656 certs.go:480] ignoring /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976_empty.pem, impossibly tiny 0 bytes
	I0731 21:30:12.635394 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:30:12.635433 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:30:12.635465 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:30:12.635500 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/key.pem (1675 bytes)
	I0731 21:30:12.635557 1146656 certs.go:484] found cert: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem (1708 bytes)
	I0731 21:30:12.636406 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:30:12.672156 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:30:12.702346 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:30:12.731602 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 21:30:12.777601 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:30:12.813409 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 21:30:12.841076 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:30:12.866418 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:30:12.890716 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/ssl/certs/11009762.pem --> /usr/share/ca-certificates/11009762.pem (1708 bytes)
	I0731 21:30:12.915792 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:30:12.940826 1146656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19360-1093692/.minikube/certs/1100976.pem --> /usr/share/ca-certificates/1100976.pem (1338 bytes)
	I0731 21:30:12.966374 1146656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:30:12.984533 1146656 ssh_runner.go:195] Run: openssl version
	I0731 21:30:12.990538 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009762.pem && ln -fs /usr/share/ca-certificates/11009762.pem /etc/ssl/certs/11009762.pem"
	I0731 21:30:13.002053 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006781 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 20:21 /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.006862 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009762.pem
	I0731 21:30:13.012728 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11009762.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:30:13.024167 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:30:13.035617 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040041 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.040150 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:30:13.046193 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:30:13.058141 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1100976.pem && ln -fs /usr/share/ca-certificates/1100976.pem /etc/ssl/certs/1100976.pem"
	I0731 21:30:13.070085 1146656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074720 1146656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 20:21 /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.074811 1146656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1100976.pem
	I0731 21:30:13.080498 1146656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1100976.pem /etc/ssl/certs/51391683.0"
	I0731 21:30:13.092497 1146656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:30:13.097275 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:30:13.103762 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:30:13.110267 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:30:13.118325 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:30:13.124784 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:30:13.131502 1146656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
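The `openssl x509 -noout -in <cert> -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A hedged Go equivalent, for readers who prefer to see the check spelled out (the file path is a placeholder, and this is not minikube's code):

// Parse a PEM certificate and report whether it expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}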
	I0731 21:30:13.138736 1146656 kubeadm.go:392] StartCluster: {Name:no-preload-018891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-018891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:30:13.138837 1146656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:30:13.138888 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.178222 1146656 cri.go:89] found id: ""
	I0731 21:30:13.178304 1146656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:30:13.188552 1146656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:30:13.188580 1146656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:30:13.188634 1146656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:30:13.198424 1146656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:30:13.199620 1146656 kubeconfig.go:125] found "no-preload-018891" server: "https://192.168.61.246:8443"
	I0731 21:30:13.202067 1146656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:30:13.213244 1146656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.246
	I0731 21:30:13.213286 1146656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:30:13.213303 1146656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:30:13.213719 1146656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:30:13.253396 1146656 cri.go:89] found id: ""
	I0731 21:30:13.253478 1146656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:30:13.270269 1146656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:30:13.280405 1146656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:30:13.280431 1146656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:30:13.280479 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:30:13.289979 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:30:13.290047 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:30:13.299871 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:30:13.309257 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:30:13.309342 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:30:13.319593 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.329418 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:30:13.329486 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:30:13.339419 1146656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:30:13.348971 1146656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:30:13.349036 1146656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
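The grep-and-rm sequence above implements the stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so the following `kubeadm init phase kubeconfig` run regenerates it. A minimal Go sketch of that loop, with the endpoint and file list copied from the log (the helper itself is illustrative, not minikube's kubeadm.go):

// Remove kubeconfigs that do not point at the expected control plane.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: delete and let kubeadm regenerate it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}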
	I0731 21:30:13.358887 1146656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:30:13.368643 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:13.485786 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.401198 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.599529 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.677307 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:14.765353 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:30:14.765468 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.266329 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.766054 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.786157 1146656 api_server.go:72] duration metric: took 1.020803565s to wait for apiserver process to appear ...
	I0731 21:30:15.786189 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:30:15.786217 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:16.265710 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.766148 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:18.439856 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.439896 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.439914 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.492649 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:30:18.492690 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:30:18.787081 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:18.810263 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:18.810302 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.286734 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.291964 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:30:19.291999 1146656 api_server.go:103] status: https://192.168.61.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:30:19.786505 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:30:19.796699 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:30:19.807525 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:30:19.807566 1146656 api_server.go:131] duration metric: took 4.02136792s to wait for apiserver health ...
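The healthz wait above is a simple poll: GET https://<apiserver>:8443/healthz repeatedly, tolerating 403 and 500 responses while post-start hooks finish, until the endpoint returns 200 or a deadline passes. A hedged Go sketch of such a loop (URL and timings are placeholders; TLS verification is skipped only because this hits a self-signed test endpoint):

// Poll an apiserver /healthz endpoint until it reports OK or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		// 403/500 while bootstrap hooks run, or connection refused: retry.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.246:8443/healthz", 2*time.Minute))
}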
	I0731 21:30:19.807579 1146656 cni.go:84] Creating CNI manager for ""
	I0731 21:30:19.807588 1146656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:30:19.809353 1146656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:30:15.101218 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:15.600733 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.101137 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:16.601585 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.101343 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.101295 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:18.601307 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.100682 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:19.601155 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:17.942857 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.943771 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:19.810433 1146656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:30:19.821002 1146656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:30:19.868402 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:30:19.883129 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:30:19.883180 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:30:19.883192 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:30:19.883204 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:30:19.883212 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:30:19.883221 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:30:19.883229 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:30:19.883242 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:30:19.883261 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:30:19.883274 1146656 system_pods.go:74] duration metric: took 14.843323ms to wait for pod list to return data ...
	I0731 21:30:19.883284 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:30:19.897327 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:30:19.897368 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:30:19.897382 1146656 node_conditions.go:105] duration metric: took 14.091172ms to run NodePressure ...
	I0731 21:30:19.897407 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:30:20.196896 1146656 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 21:30:20.202966 1146656 kubeadm.go:739] kubelet initialised
	I0731 21:30:20.202990 1146656 kubeadm.go:740] duration metric: took 6.059782ms waiting for restarted kubelet to initialise ...
	I0731 21:30:20.203000 1146656 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:20.208123 1146656 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.214186 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214236 1146656 pod_ready.go:81] duration metric: took 6.07909ms for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.214247 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.214253 1146656 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.220223 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220256 1146656 pod_ready.go:81] duration metric: took 5.988701ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.220267 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "etcd-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.220273 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.228507 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228536 1146656 pod_ready.go:81] duration metric: took 8.255655ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.228545 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-apiserver-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.228553 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.272704 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272743 1146656 pod_ready.go:81] duration metric: took 44.182664ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.272755 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.272777 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:20.673129 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673158 1146656 pod_ready.go:81] duration metric: took 400.361902ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:20.673170 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-proxy-x2dnn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:20.673177 1146656 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.072429 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072460 1146656 pod_ready.go:81] duration metric: took 399.27644ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.072471 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "kube-scheduler-no-preload-018891" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.072478 1146656 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:21.472593 1146656 pod_ready.go:97] node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472626 1146656 pod_ready.go:81] duration metric: took 400.13982ms for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:30:21.472637 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-018891" hosting pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:21.472645 1146656 pod_ready.go:38] duration metric: took 1.26963694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:21.472664 1146656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:30:21.484323 1146656 ops.go:34] apiserver oom_adj: -16
	I0731 21:30:21.484351 1146656 kubeadm.go:597] duration metric: took 8.295763074s to restartPrimaryControlPlane
	I0731 21:30:21.484361 1146656 kubeadm.go:394] duration metric: took 8.34563439s to StartCluster
	I0731 21:30:21.484379 1146656 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.484460 1146656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:30:21.486137 1146656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:30:21.486409 1146656 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:30:21.486485 1146656 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:30:21.486584 1146656 addons.go:69] Setting storage-provisioner=true in profile "no-preload-018891"
	I0731 21:30:21.486615 1146656 addons.go:234] Setting addon storage-provisioner=true in "no-preload-018891"
	I0731 21:30:21.486646 1146656 addons.go:69] Setting metrics-server=true in profile "no-preload-018891"
	I0731 21:30:21.486692 1146656 config.go:182] Loaded profile config "no-preload-018891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:30:21.486707 1146656 addons.go:234] Setting addon metrics-server=true in "no-preload-018891"
	W0731 21:30:21.486718 1146656 addons.go:243] addon metrics-server should already be in state true
	I0731 21:30:21.486759 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	W0731 21:30:21.486664 1146656 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:30:21.486850 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.486615 1146656 addons.go:69] Setting default-storageclass=true in profile "no-preload-018891"
	I0731 21:30:21.486954 1146656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018891"
	I0731 21:30:21.487107 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487150 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487230 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487267 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.487371 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.487406 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.488066 1146656 out.go:177] * Verifying Kubernetes components...
	I0731 21:30:21.489491 1146656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:30:21.503876 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0731 21:30:21.504017 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0731 21:30:21.504086 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:30:21.504598 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504642 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.504682 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.505173 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505193 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505199 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505213 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505305 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.505327 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.505554 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505629 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505639 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.505831 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.506154 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506164 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.506183 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.508914 1146656 addons.go:234] Setting addon default-storageclass=true in "no-preload-018891"
	W0731 21:30:21.508932 1146656 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:30:21.508957 1146656 host.go:66] Checking if "no-preload-018891" exists ...
	I0731 21:30:21.509187 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.509213 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.526066 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0731 21:30:21.528731 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.529285 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.529311 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.529784 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.530000 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.532450 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.534700 1146656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:30:21.536115 1146656 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.536141 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:30:21.536170 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.540044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540592 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.540622 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.540851 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.541104 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.541270 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.541425 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.547128 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0731 21:30:21.547184 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0731 21:30:21.547786 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.547865 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.548426 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548445 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548429 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.548466 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.548780 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548845 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.548959 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.549425 1146656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:30:21.549473 1146656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:30:21.551116 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.553068 1146656 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:30:21.554401 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:30:21.554418 1146656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:30:21.554445 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.557987 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558385 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.558410 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.558728 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.558976 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.559164 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.559326 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.569320 1146656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0731 21:30:21.569956 1146656 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:30:21.570511 1146656 main.go:141] libmachine: Using API Version  1
	I0731 21:30:21.570534 1146656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:30:21.571119 1146656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:30:21.571339 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetState
	I0731 21:30:21.573316 1146656 main.go:141] libmachine: (no-preload-018891) Calling .DriverName
	I0731 21:30:21.573563 1146656 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.573585 1146656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:30:21.573604 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHHostname
	I0731 21:30:21.576643 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577012 1146656 main.go:141] libmachine: (no-preload-018891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b2:a0", ip: ""} in network mk-no-preload-018891: {Iface:virbr1 ExpiryTime:2024-07-31 22:29:49 +0000 UTC Type:0 Mac:52:54:00:3c:b2:a0 Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:no-preload-018891 Clientid:01:52:54:00:3c:b2:a0}
	I0731 21:30:21.577044 1146656 main.go:141] libmachine: (no-preload-018891) DBG | domain no-preload-018891 has defined IP address 192.168.61.246 and MAC address 52:54:00:3c:b2:a0 in network mk-no-preload-018891
	I0731 21:30:21.577214 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHPort
	I0731 21:30:21.577511 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHKeyPath
	I0731 21:30:21.577688 1146656 main.go:141] libmachine: (no-preload-018891) Calling .GetSSHUsername
	I0731 21:30:21.577849 1146656 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/no-preload-018891/id_rsa Username:docker}
	I0731 21:30:21.700050 1146656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:30:21.717247 1146656 node_ready.go:35] waiting up to 6m0s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:21.798175 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:30:21.818043 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:30:21.818078 1146656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:30:21.823805 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:30:21.862781 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:30:21.862812 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:30:21.898427 1146656 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:21.898457 1146656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:30:21.948766 1146656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:30:23.027256 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229032744s)
	I0731 21:30:23.027318 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027322 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203467073s)
	I0731 21:30:23.027367 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027383 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027401 1146656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.078593532s)
	I0731 21:30:23.027335 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027442 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027459 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027708 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027714 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027723 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027728 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027732 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027738 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027740 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027746 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027794 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.027808 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.027818 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.027814 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.027827 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.027991 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028003 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028037 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028056 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028061 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.028071 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.028081 1146656 addons.go:475] Verifying addon metrics-server=true in "no-preload-018891"
	I0731 21:30:23.028084 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.028119 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.034930 1146656 main.go:141] libmachine: Making call to close driver server
	I0731 21:30:23.034965 1146656 main.go:141] libmachine: (no-preload-018891) Calling .Close
	I0731 21:30:23.035312 1146656 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:30:23.035333 1146656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:30:23.035346 1146656 main.go:141] libmachine: (no-preload-018891) DBG | Closing plugin on server side
	I0731 21:30:23.037042 1146656 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0731 21:30:21.264247 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.264691 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:20.100856 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:20.601336 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.101059 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.601023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.100791 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:22.601360 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:23.600731 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.101318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:24.601285 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:21.945141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:24.442664 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:23.038375 1146656 addons.go:510] duration metric: took 1.551892195s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0731 21:30:23.721386 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.721450 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:25.264972 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:27.266151 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:25.101043 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:25.601045 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.101312 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:26.600559 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:27.100884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:27.100987 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:27.138104 1147424 cri.go:89] found id: ""
	I0731 21:30:27.138142 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.138154 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:27.138163 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:27.138233 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:27.175030 1147424 cri.go:89] found id: ""
	I0731 21:30:27.175068 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.175080 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:27.175088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:27.175158 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:27.209891 1147424 cri.go:89] found id: ""
	I0731 21:30:27.209925 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.209934 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:27.209941 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:27.209992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:27.247117 1147424 cri.go:89] found id: ""
	I0731 21:30:27.247154 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.247163 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:27.247170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:27.247236 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:27.286595 1147424 cri.go:89] found id: ""
	I0731 21:30:27.286625 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.286633 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:27.286639 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:27.286695 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:27.321169 1147424 cri.go:89] found id: ""
	I0731 21:30:27.321201 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.321218 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:27.321226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:27.321310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:27.356278 1147424 cri.go:89] found id: ""
	I0731 21:30:27.356306 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.356317 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:27.356323 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:27.356386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:27.390351 1147424 cri.go:89] found id: ""
	I0731 21:30:27.390378 1147424 logs.go:276] 0 containers: []
	W0731 21:30:27.390387 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:27.390398 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:27.390412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:27.440412 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:27.440451 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:27.454295 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:27.454330 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:27.575971 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:27.575999 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:27.576018 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:27.639090 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:27.639141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:26.442847 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.943311 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:28.221333 1146656 node_ready.go:53] node "no-preload-018891" has status "Ready":"False"
	I0731 21:30:29.221116 1146656 node_ready.go:49] node "no-preload-018891" has status "Ready":"True"
	I0731 21:30:29.221150 1146656 node_ready.go:38] duration metric: took 7.50385465s for node "no-preload-018891" to be "Ready" ...
	I0731 21:30:29.221161 1146656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:30:29.226655 1146656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:31.233713 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:29.764835 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:31.764914 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:34.264305 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:30.177467 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:30.191103 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:30.191179 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:30.226529 1147424 cri.go:89] found id: ""
	I0731 21:30:30.226575 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.226584 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:30.226591 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:30.226653 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:30.262162 1147424 cri.go:89] found id: ""
	I0731 21:30:30.262193 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.262202 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:30.262209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:30.262275 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:30.301663 1147424 cri.go:89] found id: ""
	I0731 21:30:30.301698 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.301706 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:30.301713 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:30.301769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:30.342073 1147424 cri.go:89] found id: ""
	I0731 21:30:30.342105 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.342117 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:30.342125 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:30.342199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:30.375980 1147424 cri.go:89] found id: ""
	I0731 21:30:30.376013 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.376024 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:30.376033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:30.376114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:30.409852 1147424 cri.go:89] found id: ""
	I0731 21:30:30.409892 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.409900 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:30.409907 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:30.409960 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:30.444551 1147424 cri.go:89] found id: ""
	I0731 21:30:30.444592 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.444604 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:30.444612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:30.444672 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:30.481953 1147424 cri.go:89] found id: ""
	I0731 21:30:30.481987 1147424 logs.go:276] 0 containers: []
	W0731 21:30:30.481995 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:30.482006 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:30.482024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:30.533740 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:30.533785 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:30.546789 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:30.546831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:30.622294 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:30.622321 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:30.622338 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:30.693871 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:30.693922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.236318 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:33.249452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:33.249545 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:33.288064 1147424 cri.go:89] found id: ""
	I0731 21:30:33.288110 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.288124 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:33.288133 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:33.288208 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:33.321269 1147424 cri.go:89] found id: ""
	I0731 21:30:33.321298 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.321307 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:33.321313 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:33.321368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:33.357078 1147424 cri.go:89] found id: ""
	I0731 21:30:33.357125 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.357133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:33.357140 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:33.357206 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:33.393556 1147424 cri.go:89] found id: ""
	I0731 21:30:33.393587 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.393598 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:33.393608 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:33.393674 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:33.427311 1147424 cri.go:89] found id: ""
	I0731 21:30:33.427347 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.427359 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:33.427368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:33.427438 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:33.462424 1147424 cri.go:89] found id: ""
	I0731 21:30:33.462463 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.462474 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:33.462482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:33.462557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:33.499271 1147424 cri.go:89] found id: ""
	I0731 21:30:33.499302 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.499311 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:33.499320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:33.499395 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:33.536341 1147424 cri.go:89] found id: ""
	I0731 21:30:33.536372 1147424 logs.go:276] 0 containers: []
	W0731 21:30:33.536382 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:33.536392 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:33.536406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:33.606582 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:33.606621 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:33.606640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:33.682704 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:33.682757 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:33.722410 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:33.722456 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:33.778845 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:33.778888 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:31.442470 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.443996 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:35.944317 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:33.735206 1146656 pod_ready.go:102] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.234503 1146656 pod_ready.go:92] pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.234535 1146656 pod_ready.go:81] duration metric: took 7.007846047s for pod "coredns-5cfdc65f69-9w4w4" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.234557 1146656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240361 1146656 pod_ready.go:92] pod "etcd-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.240396 1146656 pod_ready.go:81] duration metric: took 5.830601ms for pod "etcd-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.240410 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246667 1146656 pod_ready.go:92] pod "kube-apiserver-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.246697 1146656 pod_ready.go:81] duration metric: took 6.278754ms for pod "kube-apiserver-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.246707 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252616 1146656 pod_ready.go:92] pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.252646 1146656 pod_ready.go:81] duration metric: took 5.931893ms for pod "kube-controller-manager-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.252657 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257929 1146656 pod_ready.go:92] pod "kube-proxy-x2dnn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.257962 1146656 pod_ready.go:81] duration metric: took 5.298921ms for pod "kube-proxy-x2dnn" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.257976 1146656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632686 1146656 pod_ready.go:92] pod "kube-scheduler-no-preload-018891" in "kube-system" namespace has status "Ready":"True"
	I0731 21:30:36.632723 1146656 pod_ready.go:81] duration metric: took 374.739035ms for pod "kube-scheduler-no-preload-018891" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.632737 1146656 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	I0731 21:30:36.265196 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.265807 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:36.293569 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:36.311120 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:36.311235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:36.350558 1147424 cri.go:89] found id: ""
	I0731 21:30:36.350589 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.350596 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:36.350602 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:36.350655 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:36.387804 1147424 cri.go:89] found id: ""
	I0731 21:30:36.387841 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.387849 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:36.387855 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:36.387912 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:36.427225 1147424 cri.go:89] found id: ""
	I0731 21:30:36.427263 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.427273 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:36.427280 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:36.427367 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:36.470864 1147424 cri.go:89] found id: ""
	I0731 21:30:36.470896 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.470908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:36.470917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:36.470985 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:36.523075 1147424 cri.go:89] found id: ""
	I0731 21:30:36.523109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.523117 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:36.523124 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:36.523188 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:36.598071 1147424 cri.go:89] found id: ""
	I0731 21:30:36.598109 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.598120 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:36.598129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:36.598200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:36.638277 1147424 cri.go:89] found id: ""
	I0731 21:30:36.638314 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.638326 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:36.638335 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:36.638402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:36.673112 1147424 cri.go:89] found id: ""
	I0731 21:30:36.673152 1147424 logs.go:276] 0 containers: []
	W0731 21:30:36.673164 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:36.673180 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:36.673197 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:36.728197 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:36.728245 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:36.742034 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:36.742072 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:36.815584 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:36.815617 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:36.815635 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:36.894418 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:36.894464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.436637 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:39.449708 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:39.449823 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:39.490244 1147424 cri.go:89] found id: ""
	I0731 21:30:39.490281 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.490293 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:39.490301 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:39.490365 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:39.523568 1147424 cri.go:89] found id: ""
	I0731 21:30:39.523601 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.523625 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:39.523640 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:39.523723 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:39.558966 1147424 cri.go:89] found id: ""
	I0731 21:30:39.559004 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.559017 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:39.559025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:39.559092 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:39.592002 1147424 cri.go:89] found id: ""
	I0731 21:30:39.592037 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.592049 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:39.592058 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:39.592145 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:39.624596 1147424 cri.go:89] found id: ""
	I0731 21:30:39.624634 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.624646 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:39.624655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:39.624722 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:39.658928 1147424 cri.go:89] found id: ""
	I0731 21:30:39.658957 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.658965 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:39.658973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:39.659024 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:39.692725 1147424 cri.go:89] found id: ""
	I0731 21:30:39.692766 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.692779 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:39.692788 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:39.692857 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:39.728770 1147424 cri.go:89] found id: ""
	I0731 21:30:39.728811 1147424 logs.go:276] 0 containers: []
	W0731 21:30:39.728823 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:39.728837 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:39.728854 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:39.799162 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:39.799193 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:39.799213 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:38.443560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.942937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:38.638956 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.640407 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:40.764748 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.765335 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:39.884581 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:39.884625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:39.923650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:39.923687 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:39.977735 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:39.977787 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.491668 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:42.513530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:42.513623 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:42.563932 1147424 cri.go:89] found id: ""
	I0731 21:30:42.563968 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.563982 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:42.563991 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:42.564067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:42.598089 1147424 cri.go:89] found id: ""
	I0731 21:30:42.598122 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.598131 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:42.598138 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:42.598199 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:42.631493 1147424 cri.go:89] found id: ""
	I0731 21:30:42.631528 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.631540 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:42.631549 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:42.631626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:42.668358 1147424 cri.go:89] found id: ""
	I0731 21:30:42.668395 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.668408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:42.668416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:42.668484 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:42.701115 1147424 cri.go:89] found id: ""
	I0731 21:30:42.701150 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.701161 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:42.701170 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:42.701248 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:42.736626 1147424 cri.go:89] found id: ""
	I0731 21:30:42.736665 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.736678 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:42.736687 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:42.736759 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:42.769864 1147424 cri.go:89] found id: ""
	I0731 21:30:42.769897 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.769904 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:42.769910 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:42.769964 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:42.803441 1147424 cri.go:89] found id: ""
	I0731 21:30:42.803477 1147424 logs.go:276] 0 containers: []
	W0731 21:30:42.803486 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:42.803497 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:42.803514 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:42.817556 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:42.817591 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:42.885011 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:42.885040 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:42.885055 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:42.964799 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:42.964851 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:43.015621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:43.015675 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:42.942984 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.943126 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:42.641436 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.139036 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:44.766405 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:46.766520 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.265061 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:45.568268 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:45.580867 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:45.580952 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:45.614028 1147424 cri.go:89] found id: ""
	I0731 21:30:45.614066 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.614076 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:45.614082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:45.614152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:45.650207 1147424 cri.go:89] found id: ""
	I0731 21:30:45.650235 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.650245 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:45.650254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:45.650321 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:45.684405 1147424 cri.go:89] found id: ""
	I0731 21:30:45.684433 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.684444 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:45.684452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:45.684540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:45.718355 1147424 cri.go:89] found id: ""
	I0731 21:30:45.718397 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.718408 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:45.718416 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:45.718501 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:45.755484 1147424 cri.go:89] found id: ""
	I0731 21:30:45.755532 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.755554 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:45.755563 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:45.755638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:45.791243 1147424 cri.go:89] found id: ""
	I0731 21:30:45.791277 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.791290 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:45.791298 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:45.791368 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:45.827118 1147424 cri.go:89] found id: ""
	I0731 21:30:45.827157 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.827169 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:45.827177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:45.827244 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:45.866131 1147424 cri.go:89] found id: ""
	I0731 21:30:45.866166 1147424 logs.go:276] 0 containers: []
	W0731 21:30:45.866177 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:45.866191 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:45.866207 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:45.919945 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:45.919988 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:45.935650 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:45.935685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:46.008387 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:46.008417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:46.008437 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:46.087063 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:46.087119 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:48.626079 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:48.639423 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:48.639502 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:48.673340 1147424 cri.go:89] found id: ""
	I0731 21:30:48.673371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.673380 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:48.673388 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:48.673457 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:48.707662 1147424 cri.go:89] found id: ""
	I0731 21:30:48.707694 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.707704 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:48.707712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:48.707786 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:48.741679 1147424 cri.go:89] found id: ""
	I0731 21:30:48.741716 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.741728 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:48.741736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:48.741807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:48.780939 1147424 cri.go:89] found id: ""
	I0731 21:30:48.780969 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.780980 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:48.780987 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:48.781050 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:48.818882 1147424 cri.go:89] found id: ""
	I0731 21:30:48.818912 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.818920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:48.818927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:48.818982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:48.858012 1147424 cri.go:89] found id: ""
	I0731 21:30:48.858044 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.858056 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:48.858065 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:48.858140 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:48.894753 1147424 cri.go:89] found id: ""
	I0731 21:30:48.894787 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.894795 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:48.894802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:48.894863 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:48.927020 1147424 cri.go:89] found id: ""
	I0731 21:30:48.927056 1147424 logs.go:276] 0 containers: []
	W0731 21:30:48.927066 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:48.927078 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:48.927099 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:48.983634 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:48.983678 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:48.998249 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:48.998280 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:49.068981 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:49.069006 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:49.069024 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:49.154613 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:49.154658 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:46.943398 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:48.953937 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:47.139335 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:49.139858 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.139967 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.764837 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.265088 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:51.693023 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:51.706145 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:51.706246 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:51.737003 1147424 cri.go:89] found id: ""
	I0731 21:30:51.737032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.737041 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:51.737046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:51.737114 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:51.772405 1147424 cri.go:89] found id: ""
	I0731 21:30:51.772441 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.772452 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:51.772461 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:51.772518 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.805868 1147424 cri.go:89] found id: ""
	I0731 21:30:51.805900 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.805910 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:51.805918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:51.805986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:51.841996 1147424 cri.go:89] found id: ""
	I0731 21:30:51.842032 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.842045 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:51.842054 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:51.842130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:51.874698 1147424 cri.go:89] found id: ""
	I0731 21:30:51.874734 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.874746 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:51.874755 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:51.874824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:51.908924 1147424 cri.go:89] found id: ""
	I0731 21:30:51.908955 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.908967 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:51.908973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:51.909037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:51.945056 1147424 cri.go:89] found id: ""
	I0731 21:30:51.945085 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.945096 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:51.945104 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:51.945167 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:51.979480 1147424 cri.go:89] found id: ""
	I0731 21:30:51.979513 1147424 logs.go:276] 0 containers: []
	W0731 21:30:51.979538 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:51.979552 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:51.979571 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:52.055960 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:52.055992 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:52.056009 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:52.132988 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:52.133039 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:52.172054 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:52.172098 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:52.226311 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:52.226355 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:54.741919 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:54.755241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:54.755319 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:54.789532 1147424 cri.go:89] found id: ""
	I0731 21:30:54.789563 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.789574 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:54.789583 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:54.789652 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:54.824196 1147424 cri.go:89] found id: ""
	I0731 21:30:54.824229 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.824240 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:54.824248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:54.824314 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:51.443199 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.944480 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:53.140181 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:55.144767 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:56.265184 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.765513 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:54.860579 1147424 cri.go:89] found id: ""
	I0731 21:30:54.860611 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.860620 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:54.860627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:54.860679 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:54.897438 1147424 cri.go:89] found id: ""
	I0731 21:30:54.897472 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.897484 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:54.897493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:54.897569 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:54.935283 1147424 cri.go:89] found id: ""
	I0731 21:30:54.935318 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.935330 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:54.935339 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:54.935409 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:54.970819 1147424 cri.go:89] found id: ""
	I0731 21:30:54.970850 1147424 logs.go:276] 0 containers: []
	W0731 21:30:54.970858 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:54.970865 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:54.970916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:55.004983 1147424 cri.go:89] found id: ""
	I0731 21:30:55.005019 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.005029 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:55.005038 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:55.005111 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:55.040711 1147424 cri.go:89] found id: ""
	I0731 21:30:55.040740 1147424 logs.go:276] 0 containers: []
	W0731 21:30:55.040749 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:55.040760 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:55.040774 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:55.117255 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:55.117290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:55.117308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:55.195423 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:55.195466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:55.234017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:55.234050 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:55.287518 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:55.287562 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:57.802888 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:30:57.816049 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:30:57.816152 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:30:57.849582 1147424 cri.go:89] found id: ""
	I0731 21:30:57.849616 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.849627 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:30:57.849635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:30:57.849713 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:30:57.883334 1147424 cri.go:89] found id: ""
	I0731 21:30:57.883371 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.883382 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:30:57.883391 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:30:57.883459 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:30:57.917988 1147424 cri.go:89] found id: ""
	I0731 21:30:57.918018 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.918028 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:30:57.918034 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:30:57.918095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:30:57.956169 1147424 cri.go:89] found id: ""
	I0731 21:30:57.956205 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.956217 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:30:57.956229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:30:57.956296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:30:57.992259 1147424 cri.go:89] found id: ""
	I0731 21:30:57.992291 1147424 logs.go:276] 0 containers: []
	W0731 21:30:57.992301 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:30:57.992308 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:30:57.992371 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:30:58.027969 1147424 cri.go:89] found id: ""
	I0731 21:30:58.027996 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.028006 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:30:58.028013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:30:58.028065 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:30:58.063018 1147424 cri.go:89] found id: ""
	I0731 21:30:58.063048 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.063057 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:30:58.063064 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:30:58.063117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:30:58.097096 1147424 cri.go:89] found id: ""
	I0731 21:30:58.097131 1147424 logs.go:276] 0 containers: []
	W0731 21:30:58.097143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:30:58.097158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:30:58.097175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:30:58.137311 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:30:58.137341 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:30:58.186533 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:30:58.186575 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:30:58.200436 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:30:58.200469 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:30:58.270006 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:30:58.270033 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:30:58.270053 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:30:56.444446 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:58.942906 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.943227 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:30:57.639057 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.140108 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:01.265139 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:03.266080 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:00.855423 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:00.868032 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:00.868128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:00.901453 1147424 cri.go:89] found id: ""
	I0731 21:31:00.901486 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.901498 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:00.901506 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:00.901586 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:00.940566 1147424 cri.go:89] found id: ""
	I0731 21:31:00.940598 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.940614 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:00.940623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:00.940693 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:00.975729 1147424 cri.go:89] found id: ""
	I0731 21:31:00.975767 1147424 logs.go:276] 0 containers: []
	W0731 21:31:00.975778 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:00.975785 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:00.975852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:01.010713 1147424 cri.go:89] found id: ""
	I0731 21:31:01.010747 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.010759 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:01.010768 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:01.010842 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:01.044675 1147424 cri.go:89] found id: ""
	I0731 21:31:01.044709 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.044718 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:01.044725 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:01.044785 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:01.078574 1147424 cri.go:89] found id: ""
	I0731 21:31:01.078614 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.078625 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:01.078634 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:01.078696 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:01.116013 1147424 cri.go:89] found id: ""
	I0731 21:31:01.116051 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.116062 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:01.116071 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:01.116161 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:01.152596 1147424 cri.go:89] found id: ""
	I0731 21:31:01.152631 1147424 logs.go:276] 0 containers: []
	W0731 21:31:01.152640 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:01.152650 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:01.152666 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:01.203674 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:01.203726 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:01.218212 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:01.218261 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:01.290579 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:01.290604 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:01.290621 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:01.369885 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:01.369929 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:03.910280 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:03.923195 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:03.923276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:03.958378 1147424 cri.go:89] found id: ""
	I0731 21:31:03.958411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.958420 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:03.958427 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:03.958496 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:03.993096 1147424 cri.go:89] found id: ""
	I0731 21:31:03.993128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:03.993139 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:03.993148 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:03.993219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:04.029519 1147424 cri.go:89] found id: ""
	I0731 21:31:04.029552 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.029561 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:04.029569 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:04.029625 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:04.065597 1147424 cri.go:89] found id: ""
	I0731 21:31:04.065633 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.065643 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:04.065652 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:04.065719 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:04.101708 1147424 cri.go:89] found id: ""
	I0731 21:31:04.101744 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.101755 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:04.101763 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:04.101835 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:04.137732 1147424 cri.go:89] found id: ""
	I0731 21:31:04.137773 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.137783 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:04.137792 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:04.137866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:04.173141 1147424 cri.go:89] found id: ""
	I0731 21:31:04.173173 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.173188 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:04.173197 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:04.173269 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:04.208707 1147424 cri.go:89] found id: ""
	I0731 21:31:04.208742 1147424 logs.go:276] 0 containers: []
	W0731 21:31:04.208753 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:04.208770 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:04.208789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:04.279384 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:04.279417 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:04.279498 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:04.362158 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:04.362203 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:04.401372 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:04.401412 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:04.453988 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:04.454047 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:03.443745 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.942529 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:02.639283 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:04.639372 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:05.765358 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:08.265854 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:06.968373 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:06.982182 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:06.982268 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:07.018082 1147424 cri.go:89] found id: ""
	I0731 21:31:07.018112 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.018122 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:07.018129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:07.018197 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:07.050272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.050309 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.050319 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:07.050325 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:07.050392 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:07.085174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.085206 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.085215 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:07.085221 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:07.085285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:07.119239 1147424 cri.go:89] found id: ""
	I0731 21:31:07.119274 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.119282 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:07.119289 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:07.119353 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:07.156846 1147424 cri.go:89] found id: ""
	I0731 21:31:07.156876 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.156883 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:07.156889 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:07.156942 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:07.191272 1147424 cri.go:89] found id: ""
	I0731 21:31:07.191305 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.191314 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:07.191320 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:07.191384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:07.231174 1147424 cri.go:89] found id: ""
	I0731 21:31:07.231209 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.231221 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:07.231231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:07.231295 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:07.266525 1147424 cri.go:89] found id: ""
	I0731 21:31:07.266551 1147424 logs.go:276] 0 containers: []
	W0731 21:31:07.266558 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:07.266567 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:07.266589 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:07.306626 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:07.306659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:07.360568 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:07.360625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:07.374630 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:07.374665 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:07.444054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:07.444081 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:07.444118 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:07.942657 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.943080 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:07.140848 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:09.639749 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.266538 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.268527 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:10.030591 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:10.043498 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:10.043571 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:10.076835 1147424 cri.go:89] found id: ""
	I0731 21:31:10.076875 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.076887 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:10.076897 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:10.076966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:10.111342 1147424 cri.go:89] found id: ""
	I0731 21:31:10.111384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.111396 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:10.111404 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:10.111473 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:10.146858 1147424 cri.go:89] found id: ""
	I0731 21:31:10.146896 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.146911 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:10.146920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:10.146989 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:10.180682 1147424 cri.go:89] found id: ""
	I0731 21:31:10.180717 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.180729 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:10.180738 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:10.180804 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:10.215147 1147424 cri.go:89] found id: ""
	I0731 21:31:10.215177 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.215186 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:10.215192 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:10.215249 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:10.248291 1147424 cri.go:89] found id: ""
	I0731 21:31:10.248327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.248336 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:10.248343 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:10.248398 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:10.284207 1147424 cri.go:89] found id: ""
	I0731 21:31:10.284241 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.284252 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:10.284259 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:10.284325 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:10.318286 1147424 cri.go:89] found id: ""
	I0731 21:31:10.318322 1147424 logs.go:276] 0 containers: []
	W0731 21:31:10.318331 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:10.318342 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:10.318356 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:10.368429 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:10.368476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:10.383638 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:10.383673 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:10.450696 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:10.450720 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:10.450742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:10.530413 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:10.530458 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.084947 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:13.098074 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:13.098156 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:13.132915 1147424 cri.go:89] found id: ""
	I0731 21:31:13.132952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.132962 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:13.132968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:13.133037 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:13.173568 1147424 cri.go:89] found id: ""
	I0731 21:31:13.173597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.173605 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:13.173612 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:13.173668 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:13.207356 1147424 cri.go:89] found id: ""
	I0731 21:31:13.207388 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.207402 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:13.207411 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:13.207478 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:13.243452 1147424 cri.go:89] found id: ""
	I0731 21:31:13.243482 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.243493 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:13.243502 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:13.243587 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:13.278682 1147424 cri.go:89] found id: ""
	I0731 21:31:13.278719 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.278729 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:13.278736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:13.278794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:13.312698 1147424 cri.go:89] found id: ""
	I0731 21:31:13.312727 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.312735 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:13.312742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:13.312796 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:13.346223 1147424 cri.go:89] found id: ""
	I0731 21:31:13.346259 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.346270 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:13.346279 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:13.346350 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:13.380778 1147424 cri.go:89] found id: ""
	I0731 21:31:13.380819 1147424 logs.go:276] 0 containers: []
	W0731 21:31:13.380833 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:13.380847 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:13.380889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:13.394337 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:13.394372 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:13.472260 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:13.472290 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:13.472308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:13.549561 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:13.549608 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:13.589373 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:13.589416 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:11.943150 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.443284 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:12.140029 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.641142 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:14.765639 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.265180 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.265765 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:16.143472 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:16.155966 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:16.156039 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:16.194187 1147424 cri.go:89] found id: ""
	I0731 21:31:16.194216 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.194224 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:16.194231 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:16.194299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:16.228700 1147424 cri.go:89] found id: ""
	I0731 21:31:16.228738 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.228751 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:16.228760 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:16.228844 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:16.261597 1147424 cri.go:89] found id: ""
	I0731 21:31:16.261629 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.261640 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:16.261647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:16.261716 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:16.299664 1147424 cri.go:89] found id: ""
	I0731 21:31:16.299697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.299709 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:16.299718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:16.299780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:16.350144 1147424 cri.go:89] found id: ""
	I0731 21:31:16.350172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.350181 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:16.350188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:16.350254 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:16.385259 1147424 cri.go:89] found id: ""
	I0731 21:31:16.385294 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.385303 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:16.385310 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:16.385364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:16.419555 1147424 cri.go:89] found id: ""
	I0731 21:31:16.419597 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.419610 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:16.419619 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:16.419714 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:16.455956 1147424 cri.go:89] found id: ""
	I0731 21:31:16.455993 1147424 logs.go:276] 0 containers: []
	W0731 21:31:16.456005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:16.456029 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:16.456048 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:16.493234 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:16.493269 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:16.544931 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:16.544975 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:16.559513 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:16.559553 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:16.625127 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.625158 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:16.625176 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.200306 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:19.213303 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:19.213393 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:19.247139 1147424 cri.go:89] found id: ""
	I0731 21:31:19.247171 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.247179 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:19.247186 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:19.247245 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:19.282630 1147424 cri.go:89] found id: ""
	I0731 21:31:19.282659 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.282668 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:19.282674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:19.282740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:19.317287 1147424 cri.go:89] found id: ""
	I0731 21:31:19.317327 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.317338 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:19.317345 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:19.317410 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:19.352680 1147424 cri.go:89] found id: ""
	I0731 21:31:19.352718 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.352738 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:19.352747 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:19.352820 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:19.385653 1147424 cri.go:89] found id: ""
	I0731 21:31:19.385697 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.385709 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:19.385718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:19.385794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:19.425552 1147424 cri.go:89] found id: ""
	I0731 21:31:19.425582 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.425591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:19.425598 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:19.425654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:19.461717 1147424 cri.go:89] found id: ""
	I0731 21:31:19.461753 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.461766 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:19.461775 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:19.461852 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:19.497504 1147424 cri.go:89] found id: ""
	I0731 21:31:19.497542 1147424 logs.go:276] 0 containers: []
	W0731 21:31:19.497554 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:19.497567 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:19.497592 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:19.571818 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:19.571867 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:19.611053 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:19.611091 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:19.662174 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:19.662220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:19.676489 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:19.676526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:19.750718 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:16.943653 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.443833 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:17.140073 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:19.639048 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.639213 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:21.764897 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.765013 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:22.251175 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:22.265094 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:22.265186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:22.298628 1147424 cri.go:89] found id: ""
	I0731 21:31:22.298665 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.298676 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:22.298684 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:22.298754 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:22.336851 1147424 cri.go:89] found id: ""
	I0731 21:31:22.336888 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.336900 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:22.336909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:22.336982 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:22.373362 1147424 cri.go:89] found id: ""
	I0731 21:31:22.373397 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.373409 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:22.373417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:22.373498 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:22.409578 1147424 cri.go:89] found id: ""
	I0731 21:31:22.409606 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.409614 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:22.409621 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:22.409675 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:22.446427 1147424 cri.go:89] found id: ""
	I0731 21:31:22.446458 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.446469 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:22.446477 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:22.446547 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:22.480629 1147424 cri.go:89] found id: ""
	I0731 21:31:22.480679 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.480691 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:22.480700 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:22.480769 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:22.515017 1147424 cri.go:89] found id: ""
	I0731 21:31:22.515058 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.515070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:22.515078 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:22.515151 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:22.552433 1147424 cri.go:89] found id: ""
	I0731 21:31:22.552462 1147424 logs.go:276] 0 containers: []
	W0731 21:31:22.552470 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:22.552480 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:22.552493 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:22.567822 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:22.567862 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:22.640554 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:22.640585 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:22.640603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:22.732714 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:22.732776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:22.790478 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:22.790515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:21.941836 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.945561 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:23.639434 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.640934 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.765376 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.264346 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:25.352413 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:25.364739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:25.364828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:25.398119 1147424 cri.go:89] found id: ""
	I0731 21:31:25.398158 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.398171 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:25.398184 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:25.398255 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:25.432874 1147424 cri.go:89] found id: ""
	I0731 21:31:25.432908 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.432919 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:25.432928 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:25.432986 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:25.467669 1147424 cri.go:89] found id: ""
	I0731 21:31:25.467702 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.467711 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:25.467717 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:25.467783 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:25.502331 1147424 cri.go:89] found id: ""
	I0731 21:31:25.502364 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.502373 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:25.502379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:25.502434 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:25.535888 1147424 cri.go:89] found id: ""
	I0731 21:31:25.535917 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.535924 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:25.535931 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:25.535990 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:25.568398 1147424 cri.go:89] found id: ""
	I0731 21:31:25.568427 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.568443 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:25.568451 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:25.568554 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:25.602724 1147424 cri.go:89] found id: ""
	I0731 21:31:25.602751 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.602759 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:25.602766 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:25.602825 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:25.635990 1147424 cri.go:89] found id: ""
	I0731 21:31:25.636021 1147424 logs.go:276] 0 containers: []
	W0731 21:31:25.636032 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:25.636045 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:25.636063 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:25.687984 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:25.688030 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:25.702979 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:25.703010 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:25.768470 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:25.768498 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:25.768519 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:25.845432 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:25.845481 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.383725 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:28.397046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:28.397130 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:28.436675 1147424 cri.go:89] found id: ""
	I0731 21:31:28.436707 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.436716 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:28.436723 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:28.436780 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:28.474084 1147424 cri.go:89] found id: ""
	I0731 21:31:28.474114 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.474122 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:28.474129 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:28.474186 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:28.512448 1147424 cri.go:89] found id: ""
	I0731 21:31:28.512485 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.512496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:28.512505 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:28.512575 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:28.557548 1147424 cri.go:89] found id: ""
	I0731 21:31:28.557579 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.557591 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:28.557599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:28.557664 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:28.600492 1147424 cri.go:89] found id: ""
	I0731 21:31:28.600526 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.600545 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:28.600553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:28.600628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:28.645067 1147424 cri.go:89] found id: ""
	I0731 21:31:28.645093 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.645101 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:28.645107 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:28.645171 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:28.678391 1147424 cri.go:89] found id: ""
	I0731 21:31:28.678431 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.678444 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:28.678452 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:28.678522 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:28.712230 1147424 cri.go:89] found id: ""
	I0731 21:31:28.712260 1147424 logs.go:276] 0 containers: []
	W0731 21:31:28.712268 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:28.712278 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:28.712297 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:28.779362 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:28.779389 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:28.779403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:28.861192 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:28.861243 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:28.900747 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:28.900781 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:28.953135 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:28.953183 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:26.442998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.443518 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.943322 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:28.139072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.638724 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:30.264991 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.764482 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:31.467806 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:31.481274 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:31.481345 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:31.516704 1147424 cri.go:89] found id: ""
	I0731 21:31:31.516741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.516754 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:31.516765 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:31.516824 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:31.553299 1147424 cri.go:89] found id: ""
	I0731 21:31:31.553332 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.553341 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:31.553348 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:31.553402 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:31.587834 1147424 cri.go:89] found id: ""
	I0731 21:31:31.587864 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.587874 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:31.587881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:31.587939 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:31.623164 1147424 cri.go:89] found id: ""
	I0731 21:31:31.623194 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.623203 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:31.623209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:31.623265 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:31.659118 1147424 cri.go:89] found id: ""
	I0731 21:31:31.659151 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.659158 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:31.659165 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:31.659219 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:31.697260 1147424 cri.go:89] found id: ""
	I0731 21:31:31.697297 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.697308 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:31.697317 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:31.697375 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:31.732585 1147424 cri.go:89] found id: ""
	I0731 21:31:31.732623 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.732635 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:31.732644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:31.732698 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:31.770922 1147424 cri.go:89] found id: ""
	I0731 21:31:31.770952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:31.770964 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:31.770976 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:31.770992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:31.823747 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:31.823805 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:31.837367 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:31.837406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:31.912937 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:31.912958 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:31.912972 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:31.991008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:31.991061 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.528933 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:34.552722 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:34.552807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:34.587277 1147424 cri.go:89] found id: ""
	I0731 21:31:34.587315 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.587326 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:34.587337 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:34.587417 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:34.619919 1147424 cri.go:89] found id: ""
	I0731 21:31:34.619952 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.619961 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:34.619968 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:34.620033 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:34.654967 1147424 cri.go:89] found id: ""
	I0731 21:31:34.655000 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.655007 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:34.655014 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:34.655066 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:34.689092 1147424 cri.go:89] found id: ""
	I0731 21:31:34.689128 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.689139 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:34.689147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:34.689217 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:34.725112 1147424 cri.go:89] found id: ""
	I0731 21:31:34.725145 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.725153 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:34.725159 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:34.725215 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:34.760377 1147424 cri.go:89] found id: ""
	I0731 21:31:34.760411 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.760422 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:34.760430 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:34.760500 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:34.796413 1147424 cri.go:89] found id: ""
	I0731 21:31:34.796445 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.796460 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:34.796468 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:34.796540 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:34.833243 1147424 cri.go:89] found id: ""
	I0731 21:31:34.833277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:34.833288 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:34.833309 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:34.833328 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:32.943881 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:35.442928 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:32.638850 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.640521 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.766140 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.264336 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.268433 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:34.911486 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:34.911552 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:34.952167 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:34.952200 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:35.010995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:35.011041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:35.025756 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:35.025795 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:35.110465 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:37.610914 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:37.623848 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:37.623935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:37.660355 1147424 cri.go:89] found id: ""
	I0731 21:31:37.660384 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.660392 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:37.660398 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:37.660456 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:37.694935 1147424 cri.go:89] found id: ""
	I0731 21:31:37.694966 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.694975 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:37.694982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:37.695048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:37.729438 1147424 cri.go:89] found id: ""
	I0731 21:31:37.729472 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.729485 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:37.729493 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:37.729570 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:37.766412 1147424 cri.go:89] found id: ""
	I0731 21:31:37.766440 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.766449 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:37.766457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:37.766519 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:37.803830 1147424 cri.go:89] found id: ""
	I0731 21:31:37.803865 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.803875 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:37.803884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:37.803956 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:37.838698 1147424 cri.go:89] found id: ""
	I0731 21:31:37.838730 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.838741 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:37.838749 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:37.838819 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:37.873274 1147424 cri.go:89] found id: ""
	I0731 21:31:37.873312 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.873324 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:37.873332 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:37.873404 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:37.907801 1147424 cri.go:89] found id: ""
	I0731 21:31:37.907835 1147424 logs.go:276] 0 containers: []
	W0731 21:31:37.907859 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:37.907870 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:37.907893 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:37.962192 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:37.962233 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:37.976530 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:37.976577 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:38.048551 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:38.048584 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:38.048603 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:38.122957 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:38.123003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:37.942944 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.442336 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:37.139834 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:39.141085 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.640176 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:41.766169 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:43.767226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:40.663623 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:40.677119 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:40.677184 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:40.710893 1147424 cri.go:89] found id: ""
	I0731 21:31:40.710923 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.710932 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:40.710939 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:40.710996 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:40.746166 1147424 cri.go:89] found id: ""
	I0731 21:31:40.746203 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.746216 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:40.746223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:40.746296 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:40.789323 1147424 cri.go:89] found id: ""
	I0731 21:31:40.789353 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.789362 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:40.789368 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:40.789433 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:40.826731 1147424 cri.go:89] found id: ""
	I0731 21:31:40.826766 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.826775 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:40.826782 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:40.826843 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:40.865533 1147424 cri.go:89] found id: ""
	I0731 21:31:40.865562 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.865570 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:40.865576 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:40.865628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:40.900523 1147424 cri.go:89] found id: ""
	I0731 21:31:40.900555 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.900564 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:40.900571 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:40.900628 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:40.934140 1147424 cri.go:89] found id: ""
	I0731 21:31:40.934172 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.934181 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:40.934187 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:40.934252 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:40.969989 1147424 cri.go:89] found id: ""
	I0731 21:31:40.970033 1147424 logs.go:276] 0 containers: []
	W0731 21:31:40.970045 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:40.970058 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:40.970076 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:41.021416 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:41.021464 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:41.035947 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:41.035978 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:41.102101 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:41.102126 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:41.102141 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:41.182412 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:41.182457 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:43.727586 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:43.740633 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:43.740725 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:43.775305 1147424 cri.go:89] found id: ""
	I0731 21:31:43.775343 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.775354 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:43.775363 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:43.775426 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:43.813410 1147424 cri.go:89] found id: ""
	I0731 21:31:43.813441 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.813449 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:43.813455 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:43.813510 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:43.848924 1147424 cri.go:89] found id: ""
	I0731 21:31:43.848959 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.848971 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:43.848979 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:43.849048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:43.884911 1147424 cri.go:89] found id: ""
	I0731 21:31:43.884950 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.884962 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:43.884971 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:43.885041 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:43.918244 1147424 cri.go:89] found id: ""
	I0731 21:31:43.918277 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.918286 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:43.918292 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:43.918348 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:43.952166 1147424 cri.go:89] found id: ""
	I0731 21:31:43.952200 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.952211 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:43.952220 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:43.952299 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:43.985756 1147424 cri.go:89] found id: ""
	I0731 21:31:43.985790 1147424 logs.go:276] 0 containers: []
	W0731 21:31:43.985850 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:43.985863 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:43.985916 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:44.020480 1147424 cri.go:89] found id: ""
	I0731 21:31:44.020516 1147424 logs.go:276] 0 containers: []
	W0731 21:31:44.020528 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:44.020542 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:44.020560 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:44.058344 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:44.058398 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:44.110703 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:44.110751 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:44.124735 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:44.124771 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:44.193412 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:44.193445 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:44.193463 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:42.442910 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.443829 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:44.140083 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.640177 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.265466 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.265667 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:46.775651 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:46.789288 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:46.789384 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.822997 1147424 cri.go:89] found id: ""
	I0731 21:31:46.823032 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.823044 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:46.823053 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:46.823123 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:46.857000 1147424 cri.go:89] found id: ""
	I0731 21:31:46.857030 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.857039 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:46.857046 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:46.857112 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:46.890362 1147424 cri.go:89] found id: ""
	I0731 21:31:46.890392 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.890404 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:46.890417 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:46.890483 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:46.922819 1147424 cri.go:89] found id: ""
	I0731 21:31:46.922848 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.922864 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:46.922871 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:46.922935 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:46.957333 1147424 cri.go:89] found id: ""
	I0731 21:31:46.957363 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.957371 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:46.957376 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:46.957444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:46.990795 1147424 cri.go:89] found id: ""
	I0731 21:31:46.990830 1147424 logs.go:276] 0 containers: []
	W0731 21:31:46.990840 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:46.990849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:46.990922 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:47.025144 1147424 cri.go:89] found id: ""
	I0731 21:31:47.025174 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.025185 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:47.025194 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:47.025263 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:47.062624 1147424 cri.go:89] found id: ""
	I0731 21:31:47.062658 1147424 logs.go:276] 0 containers: []
	W0731 21:31:47.062667 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:47.062677 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:47.062691 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:47.112698 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:47.112742 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:47.127240 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:47.127276 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:47.195034 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:47.195062 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:47.195081 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:47.277532 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:47.277574 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:49.814610 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:49.828213 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:49.828291 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:46.944364 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.442118 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:48.640243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.640580 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:50.764302 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:52.764441 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:49.861951 1147424 cri.go:89] found id: ""
	I0731 21:31:49.861982 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.861991 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:49.861998 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:49.862054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:49.898601 1147424 cri.go:89] found id: ""
	I0731 21:31:49.898630 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.898638 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:49.898644 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:49.898711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:49.933615 1147424 cri.go:89] found id: ""
	I0731 21:31:49.933652 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.933665 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:49.933673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:49.933742 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:49.970356 1147424 cri.go:89] found id: ""
	I0731 21:31:49.970395 1147424 logs.go:276] 0 containers: []
	W0731 21:31:49.970416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:49.970425 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:49.970503 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:50.004186 1147424 cri.go:89] found id: ""
	I0731 21:31:50.004220 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.004232 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:50.004241 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:50.004316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:50.037701 1147424 cri.go:89] found id: ""
	I0731 21:31:50.037741 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.037753 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:50.037761 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:50.037834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:50.074358 1147424 cri.go:89] found id: ""
	I0731 21:31:50.074390 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.074399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:50.074409 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:50.074474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:50.109052 1147424 cri.go:89] found id: ""
	I0731 21:31:50.109083 1147424 logs.go:276] 0 containers: []
	W0731 21:31:50.109091 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:50.109101 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:50.109116 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:50.167891 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:50.167935 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:50.181132 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:50.181179 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:50.247835 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:50.247865 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:50.247882 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:50.328733 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:50.328779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:52.867344 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:52.880275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:52.880355 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:52.913980 1147424 cri.go:89] found id: ""
	I0731 21:31:52.914015 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.914024 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:52.914030 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:52.914095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:52.947833 1147424 cri.go:89] found id: ""
	I0731 21:31:52.947866 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.947874 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:52.947880 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:52.947947 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:52.981345 1147424 cri.go:89] found id: ""
	I0731 21:31:52.981380 1147424 logs.go:276] 0 containers: []
	W0731 21:31:52.981393 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:52.981401 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:52.981470 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:53.016253 1147424 cri.go:89] found id: ""
	I0731 21:31:53.016283 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.016292 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:53.016299 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:53.016351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:53.049683 1147424 cri.go:89] found id: ""
	I0731 21:31:53.049716 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.049726 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:53.049734 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:53.049807 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:53.082171 1147424 cri.go:89] found id: ""
	I0731 21:31:53.082217 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.082228 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:53.082237 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:53.082308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:53.114595 1147424 cri.go:89] found id: ""
	I0731 21:31:53.114640 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.114658 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:53.114667 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:53.114739 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:53.151612 1147424 cri.go:89] found id: ""
	I0731 21:31:53.151644 1147424 logs.go:276] 0 containers: []
	W0731 21:31:53.151672 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:53.151686 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:53.151702 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:53.203251 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:53.203293 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:53.219234 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:53.219272 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:53.290273 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:53.290292 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:53.290306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:53.367967 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:53.368023 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:51.443058 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.943272 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:53.141370 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.638859 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.264069 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.265286 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:55.909173 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:55.922278 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:55.922351 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:55.959354 1147424 cri.go:89] found id: ""
	I0731 21:31:55.959389 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.959397 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:55.959403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:55.959467 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:55.998507 1147424 cri.go:89] found id: ""
	I0731 21:31:55.998544 1147424 logs.go:276] 0 containers: []
	W0731 21:31:55.998557 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:55.998566 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:55.998638 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:56.034763 1147424 cri.go:89] found id: ""
	I0731 21:31:56.034811 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.034824 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:56.034833 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:56.034914 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:56.068685 1147424 cri.go:89] found id: ""
	I0731 21:31:56.068726 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.068737 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:56.068746 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:56.068833 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:56.105785 1147424 cri.go:89] found id: ""
	I0731 21:31:56.105824 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.105837 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:56.105845 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:56.105920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:56.142701 1147424 cri.go:89] found id: ""
	I0731 21:31:56.142732 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.142744 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:56.142752 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:56.142834 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:56.177016 1147424 cri.go:89] found id: ""
	I0731 21:31:56.177064 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.177077 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:56.177089 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:56.177163 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:56.211989 1147424 cri.go:89] found id: ""
	I0731 21:31:56.212026 1147424 logs.go:276] 0 containers: []
	W0731 21:31:56.212038 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:56.212052 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:56.212070 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:56.263995 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:56.264045 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:56.277535 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:56.277570 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:56.343150 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:56.343179 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:56.343199 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:56.425361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:56.425406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:58.965276 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:31:58.978115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:31:58.978190 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:31:59.011793 1147424 cri.go:89] found id: ""
	I0731 21:31:59.011829 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.011840 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:31:59.011849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:31:59.011921 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:31:59.048117 1147424 cri.go:89] found id: ""
	I0731 21:31:59.048153 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.048164 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:31:59.048172 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:31:59.048240 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:31:59.081955 1147424 cri.go:89] found id: ""
	I0731 21:31:59.081985 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.081996 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:31:59.082004 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:31:59.082072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:31:59.116269 1147424 cri.go:89] found id: ""
	I0731 21:31:59.116308 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.116321 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:31:59.116330 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:31:59.116396 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:31:59.152551 1147424 cri.go:89] found id: ""
	I0731 21:31:59.152580 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.152592 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:31:59.152599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:31:59.152669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:31:59.186708 1147424 cri.go:89] found id: ""
	I0731 21:31:59.186749 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.186758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:31:59.186764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:31:59.186830 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:31:59.223628 1147424 cri.go:89] found id: ""
	I0731 21:31:59.223681 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.223690 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:31:59.223698 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:31:59.223773 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:31:59.256867 1147424 cri.go:89] found id: ""
	I0731 21:31:59.256901 1147424 logs.go:276] 0 containers: []
	W0731 21:31:59.256913 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:31:59.256925 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:31:59.256944 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:31:59.307167 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:31:59.307209 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:31:59.320958 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:31:59.320992 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:31:59.390776 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:31:59.390798 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:31:59.390813 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:31:59.467482 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:31:59.467534 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:31:56.445461 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:58.943434 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:57.639271 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:00.139778 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:31:59.764344 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:01.765157 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:04.264512 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.005084 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:02.017546 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:02.017635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:02.053094 1147424 cri.go:89] found id: ""
	I0731 21:32:02.053135 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.053146 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:02.053155 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:02.053212 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:02.087483 1147424 cri.go:89] found id: ""
	I0731 21:32:02.087517 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.087535 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:02.087543 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:02.087600 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:02.123647 1147424 cri.go:89] found id: ""
	I0731 21:32:02.123685 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.123696 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:02.123706 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:02.123764 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:02.157798 1147424 cri.go:89] found id: ""
	I0731 21:32:02.157828 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.157837 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:02.157843 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:02.157899 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:02.190266 1147424 cri.go:89] found id: ""
	I0731 21:32:02.190297 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.190309 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:02.190318 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:02.190377 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:02.232507 1147424 cri.go:89] found id: ""
	I0731 21:32:02.232537 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.232546 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:02.232552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:02.232605 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:02.270105 1147424 cri.go:89] found id: ""
	I0731 21:32:02.270133 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.270144 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:02.270152 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:02.270221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:02.304599 1147424 cri.go:89] found id: ""
	I0731 21:32:02.304631 1147424 logs.go:276] 0 containers: []
	W0731 21:32:02.304642 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:02.304654 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:02.304671 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:02.356686 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:02.356727 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:02.370114 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:02.370147 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:02.437753 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:02.437778 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:02.437797 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:02.518085 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:02.518131 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:01.443142 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:03.943209 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:02.640855 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.141191 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:06.265050 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.265389 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:05.071289 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:05.084496 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:05.084579 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:05.124178 1147424 cri.go:89] found id: ""
	I0731 21:32:05.124208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.124218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:05.124224 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:05.124279 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:05.162119 1147424 cri.go:89] found id: ""
	I0731 21:32:05.162155 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.162167 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:05.162173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:05.162237 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:05.198445 1147424 cri.go:89] found id: ""
	I0731 21:32:05.198483 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.198496 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:05.198504 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:05.198615 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:05.240678 1147424 cri.go:89] found id: ""
	I0731 21:32:05.240702 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.240711 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:05.240718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:05.240770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:05.276910 1147424 cri.go:89] found id: ""
	I0731 21:32:05.276942 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.276965 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:05.276974 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:05.277051 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:05.310130 1147424 cri.go:89] found id: ""
	I0731 21:32:05.310158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.310166 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:05.310173 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:05.310227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:05.345144 1147424 cri.go:89] found id: ""
	I0731 21:32:05.345179 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.345191 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:05.345199 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:05.345267 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:05.386723 1147424 cri.go:89] found id: ""
	I0731 21:32:05.386766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:05.386778 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:05.386792 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:05.386809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:05.425852 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:05.425887 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:05.482401 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:05.482447 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:05.495888 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:05.495918 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:05.562121 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:05.562153 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:05.562174 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.140837 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:08.153503 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:08.153585 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:08.187113 1147424 cri.go:89] found id: ""
	I0731 21:32:08.187143 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.187155 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:08.187164 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:08.187226 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:08.219853 1147424 cri.go:89] found id: ""
	I0731 21:32:08.219888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.219898 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:08.219906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:08.219976 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:08.253817 1147424 cri.go:89] found id: ""
	I0731 21:32:08.253848 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.253857 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:08.253864 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:08.253930 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:08.307069 1147424 cri.go:89] found id: ""
	I0731 21:32:08.307096 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.307104 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:08.307111 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:08.307176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:08.349604 1147424 cri.go:89] found id: ""
	I0731 21:32:08.349632 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.349641 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:08.349648 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:08.349711 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:08.382966 1147424 cri.go:89] found id: ""
	I0731 21:32:08.383000 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.383013 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:08.383022 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:08.383080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:08.416904 1147424 cri.go:89] found id: ""
	I0731 21:32:08.416938 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.416950 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:08.416958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:08.417021 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:08.451024 1147424 cri.go:89] found id: ""
	I0731 21:32:08.451061 1147424 logs.go:276] 0 containers: []
	W0731 21:32:08.451074 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:08.451087 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:08.451103 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:08.530394 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:08.530441 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:08.567554 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:08.567583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:08.616162 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:08.616208 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:08.629228 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:08.629264 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:08.700820 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:06.441762 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:08.443004 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.942870 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:07.638970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.139278 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:10.764866 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:13.265303 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:11.201091 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:11.213847 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:11.213920 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:11.248925 1147424 cri.go:89] found id: ""
	I0731 21:32:11.248963 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.248974 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:11.248982 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:11.249054 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:11.286134 1147424 cri.go:89] found id: ""
	I0731 21:32:11.286168 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.286185 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:11.286193 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:11.286261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:11.321493 1147424 cri.go:89] found id: ""
	I0731 21:32:11.321524 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.321534 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:11.321542 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:11.321610 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:11.356679 1147424 cri.go:89] found id: ""
	I0731 21:32:11.356708 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.356724 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:11.356731 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:11.356788 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:11.390757 1147424 cri.go:89] found id: ""
	I0731 21:32:11.390785 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.390795 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:11.390802 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:11.390868 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:11.424687 1147424 cri.go:89] found id: ""
	I0731 21:32:11.424724 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.424736 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:11.424745 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:11.424816 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:11.458542 1147424 cri.go:89] found id: ""
	I0731 21:32:11.458579 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.458590 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:11.458599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:11.458678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:11.490956 1147424 cri.go:89] found id: ""
	I0731 21:32:11.490999 1147424 logs.go:276] 0 containers: []
	W0731 21:32:11.491009 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:11.491020 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:11.491036 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:11.541013 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:11.541057 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:11.554729 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:11.554760 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:11.619828 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:11.619868 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:11.619894 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:11.697785 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:11.697837 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.235153 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:14.247701 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:14.247770 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:14.282802 1147424 cri.go:89] found id: ""
	I0731 21:32:14.282835 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.282846 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:14.282854 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:14.282926 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:14.316106 1147424 cri.go:89] found id: ""
	I0731 21:32:14.316158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.316168 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:14.316175 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:14.316235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:14.349319 1147424 cri.go:89] found id: ""
	I0731 21:32:14.349358 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.349370 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:14.349379 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:14.349446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:14.385630 1147424 cri.go:89] found id: ""
	I0731 21:32:14.385665 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.385674 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:14.385681 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:14.385745 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:14.422054 1147424 cri.go:89] found id: ""
	I0731 21:32:14.422090 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.422104 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:14.422113 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:14.422176 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:14.456170 1147424 cri.go:89] found id: ""
	I0731 21:32:14.456207 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.456216 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:14.456223 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:14.456283 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:14.489571 1147424 cri.go:89] found id: ""
	I0731 21:32:14.489611 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.489622 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:14.489632 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:14.489709 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:14.524764 1147424 cri.go:89] found id: ""
	I0731 21:32:14.524803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:14.524814 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:14.524827 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:14.524843 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:14.598487 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:14.598511 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:14.598526 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:14.675912 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:14.675954 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:14.722740 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:14.722778 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:14.780558 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:14.780604 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:13.441757 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.442992 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:12.140024 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:14.638468 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:16.639109 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:15.764963 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.265010 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:17.300221 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:17.313242 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:17.313309 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:17.349244 1147424 cri.go:89] found id: ""
	I0731 21:32:17.349276 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.349284 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:17.349293 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:17.349364 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:17.382158 1147424 cri.go:89] found id: ""
	I0731 21:32:17.382188 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.382196 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:17.382203 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:17.382276 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:17.416250 1147424 cri.go:89] found id: ""
	I0731 21:32:17.416283 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.416295 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:17.416304 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:17.416363 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:17.449192 1147424 cri.go:89] found id: ""
	I0731 21:32:17.449229 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.449240 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:17.449249 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:17.449316 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:17.482189 1147424 cri.go:89] found id: ""
	I0731 21:32:17.482223 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.482235 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:17.482244 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:17.482308 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:17.516284 1147424 cri.go:89] found id: ""
	I0731 21:32:17.516312 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.516320 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:17.516327 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:17.516380 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:17.550025 1147424 cri.go:89] found id: ""
	I0731 21:32:17.550059 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.550070 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:17.550077 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:17.550142 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:17.582378 1147424 cri.go:89] found id: ""
	I0731 21:32:17.582411 1147424 logs.go:276] 0 containers: []
	W0731 21:32:17.582424 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:17.582488 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:17.582513 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:17.635593 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:17.635640 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:17.649694 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:17.649734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:17.716275 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:17.716301 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:17.716316 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:17.800261 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:17.800327 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:17.942859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:19.943179 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:18.639313 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.639947 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.265670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:22.764461 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:20.339222 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:20.353494 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:20.353574 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:20.387397 1147424 cri.go:89] found id: ""
	I0731 21:32:20.387432 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.387441 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:20.387449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:20.387534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:20.421038 1147424 cri.go:89] found id: ""
	I0731 21:32:20.421074 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.421082 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:20.421088 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:20.421200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:20.461171 1147424 cri.go:89] found id: ""
	I0731 21:32:20.461208 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.461221 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:20.461229 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:20.461297 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:20.529655 1147424 cri.go:89] found id: ""
	I0731 21:32:20.529692 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.529704 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:20.529712 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:20.529779 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:20.584293 1147424 cri.go:89] found id: ""
	I0731 21:32:20.584327 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.584337 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:20.584344 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:20.584399 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:20.617177 1147424 cri.go:89] found id: ""
	I0731 21:32:20.617209 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.617220 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:20.617226 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:20.617282 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:20.657058 1147424 cri.go:89] found id: ""
	I0731 21:32:20.657094 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.657104 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:20.657112 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:20.657181 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:20.689987 1147424 cri.go:89] found id: ""
	I0731 21:32:20.690016 1147424 logs.go:276] 0 containers: []
	W0731 21:32:20.690026 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:20.690038 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:20.690058 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:20.702274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:20.702310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:20.766054 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:20.766088 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:20.766106 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:20.850776 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:20.850823 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:20.888735 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:20.888766 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.440658 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:23.453529 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:23.453616 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:23.487210 1147424 cri.go:89] found id: ""
	I0731 21:32:23.487249 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.487263 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:23.487271 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:23.487338 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:23.520656 1147424 cri.go:89] found id: ""
	I0731 21:32:23.520697 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.520709 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:23.520718 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:23.520794 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:23.557952 1147424 cri.go:89] found id: ""
	I0731 21:32:23.557982 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.557991 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:23.557999 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:23.558052 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:23.591428 1147424 cri.go:89] found id: ""
	I0731 21:32:23.591458 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.591466 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:23.591473 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:23.591537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:23.624978 1147424 cri.go:89] found id: ""
	I0731 21:32:23.625009 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.625019 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:23.625026 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:23.625080 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:23.659424 1147424 cri.go:89] found id: ""
	I0731 21:32:23.659460 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.659473 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:23.659482 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:23.659557 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:23.696695 1147424 cri.go:89] found id: ""
	I0731 21:32:23.696733 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.696745 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:23.696753 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:23.696818 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:23.734067 1147424 cri.go:89] found id: ""
	I0731 21:32:23.734097 1147424 logs.go:276] 0 containers: []
	W0731 21:32:23.734106 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:23.734116 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:23.734130 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:23.787432 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:23.787476 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:23.801116 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:23.801154 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:23.867801 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:23.867840 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:23.867859 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:23.952393 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:23.952435 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:22.442859 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:24.943043 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:23.139590 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.140770 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:25.264790 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.763670 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:26.490759 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:26.503050 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:26.503120 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:26.536191 1147424 cri.go:89] found id: ""
	I0731 21:32:26.536239 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.536251 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:26.536260 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:26.536330 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:26.571038 1147424 cri.go:89] found id: ""
	I0731 21:32:26.571075 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.571088 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:26.571096 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:26.571164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:26.605295 1147424 cri.go:89] found id: ""
	I0731 21:32:26.605333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.605346 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:26.605355 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:26.605422 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:26.644430 1147424 cri.go:89] found id: ""
	I0731 21:32:26.644472 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.644482 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:26.644489 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:26.644553 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:26.675985 1147424 cri.go:89] found id: ""
	I0731 21:32:26.676020 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.676033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:26.676041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:26.676128 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:26.707738 1147424 cri.go:89] found id: ""
	I0731 21:32:26.707766 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.707780 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:26.707787 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:26.707850 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:26.743969 1147424 cri.go:89] found id: ""
	I0731 21:32:26.743998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.744007 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:26.744013 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:26.744067 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:26.782301 1147424 cri.go:89] found id: ""
	I0731 21:32:26.782333 1147424 logs.go:276] 0 containers: []
	W0731 21:32:26.782346 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:26.782361 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:26.782377 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:26.818548 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:26.818580 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.870586 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:26.870632 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:26.883944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:26.883983 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:26.951603 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:26.951630 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:26.951648 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:29.527796 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:29.540627 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:29.540862 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:29.575513 1147424 cri.go:89] found id: ""
	I0731 21:32:29.575544 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.575553 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:29.575559 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:29.575627 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:29.607395 1147424 cri.go:89] found id: ""
	I0731 21:32:29.607425 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.607434 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:29.607440 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:29.607505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:29.641509 1147424 cri.go:89] found id: ""
	I0731 21:32:29.641539 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.641548 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:29.641553 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:29.641604 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:29.673166 1147424 cri.go:89] found id: ""
	I0731 21:32:29.673197 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.673207 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:29.673215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:29.673285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:29.703698 1147424 cri.go:89] found id: ""
	I0731 21:32:29.703744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.703752 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:29.703759 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:29.703821 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:29.738704 1147424 cri.go:89] found id: ""
	I0731 21:32:29.738746 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.738758 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:29.738767 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:29.738858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:29.771359 1147424 cri.go:89] found id: ""
	I0731 21:32:29.771388 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.771399 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:29.771407 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:29.771474 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:29.806579 1147424 cri.go:89] found id: ""
	I0731 21:32:29.806614 1147424 logs.go:276] 0 containers: []
	W0731 21:32:29.806625 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:29.806635 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:29.806649 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:26.943079 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.442599 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:27.638623 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.639949 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.764393 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:31.764649 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.764888 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:29.857957 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:29.857994 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:29.871348 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:29.871387 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:29.942833 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:29.942864 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:29.942880 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:30.027254 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:30.027306 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:32.565077 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:32.577796 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:32.577878 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:32.611725 1147424 cri.go:89] found id: ""
	I0731 21:32:32.611762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.611774 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:32.611783 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:32.611859 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:32.647901 1147424 cri.go:89] found id: ""
	I0731 21:32:32.647939 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.647951 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:32.647959 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:32.648018 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:32.681042 1147424 cri.go:89] found id: ""
	I0731 21:32:32.681073 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.681084 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:32.681091 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:32.681162 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:32.716141 1147424 cri.go:89] found id: ""
	I0731 21:32:32.716173 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.716182 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:32.716188 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:32.716242 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:32.753207 1147424 cri.go:89] found id: ""
	I0731 21:32:32.753236 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.753244 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:32.753250 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:32.753301 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:32.787591 1147424 cri.go:89] found id: ""
	I0731 21:32:32.787619 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.787628 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:32.787635 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:32.787717 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:32.822430 1147424 cri.go:89] found id: ""
	I0731 21:32:32.822464 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.822476 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:32.822484 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:32.822544 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:32.854566 1147424 cri.go:89] found id: ""
	I0731 21:32:32.854600 1147424 logs.go:276] 0 containers: []
	W0731 21:32:32.854609 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:32.854621 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:32.854636 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:32.905256 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:32.905310 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:32.918575 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:32.918607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:32.981644 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:32.981669 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:32.981685 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:33.062767 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:33.062814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:31.443380 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:33.942793 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.943502 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:32.139483 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:34.140185 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.638720 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:36.264481 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.265008 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:35.599598 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:35.612328 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:35.612403 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:35.647395 1147424 cri.go:89] found id: ""
	I0731 21:32:35.647428 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.647439 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:35.647448 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:35.647514 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:35.682339 1147424 cri.go:89] found id: ""
	I0731 21:32:35.682370 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.682378 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:35.682384 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:35.682440 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:35.721727 1147424 cri.go:89] found id: ""
	I0731 21:32:35.721762 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.721775 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:35.721784 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:35.721866 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:35.754648 1147424 cri.go:89] found id: ""
	I0731 21:32:35.754678 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.754688 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:35.754697 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:35.754761 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:35.787880 1147424 cri.go:89] found id: ""
	I0731 21:32:35.787910 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.787922 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:35.787930 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:35.788004 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:35.822619 1147424 cri.go:89] found id: ""
	I0731 21:32:35.822656 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.822668 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:35.822677 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:35.822743 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:35.856160 1147424 cri.go:89] found id: ""
	I0731 21:32:35.856198 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.856210 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:35.856219 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:35.856284 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:35.888842 1147424 cri.go:89] found id: ""
	I0731 21:32:35.888881 1147424 logs.go:276] 0 containers: []
	W0731 21:32:35.888893 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:35.888906 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:35.888924 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:35.956296 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:35.956323 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:35.956342 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:36.039485 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:36.039531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:36.081202 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:36.081247 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:36.130789 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:36.130831 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:38.647723 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:38.660334 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:38.660405 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:38.696782 1147424 cri.go:89] found id: ""
	I0731 21:32:38.696813 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.696822 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:38.696828 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:38.696887 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:38.731835 1147424 cri.go:89] found id: ""
	I0731 21:32:38.731874 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.731887 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:38.731895 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:38.731969 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:38.768894 1147424 cri.go:89] found id: ""
	I0731 21:32:38.768924 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.768935 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:38.768943 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:38.769012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:38.802331 1147424 cri.go:89] found id: ""
	I0731 21:32:38.802361 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.802370 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:38.802377 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:38.802430 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:38.835822 1147424 cri.go:89] found id: ""
	I0731 21:32:38.835852 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.835864 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:38.835881 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:38.835940 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:38.869104 1147424 cri.go:89] found id: ""
	I0731 21:32:38.869141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.869153 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:38.869162 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:38.869234 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:38.907732 1147424 cri.go:89] found id: ""
	I0731 21:32:38.907769 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.907781 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:38.907789 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:38.907858 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:38.942961 1147424 cri.go:89] found id: ""
	I0731 21:32:38.942994 1147424 logs.go:276] 0 containers: []
	W0731 21:32:38.943005 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:38.943017 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:38.943032 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:38.997537 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:38.997584 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:39.011711 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:39.011745 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:39.082834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:39.082861 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:39.082878 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:39.168702 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:39.168758 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:38.442196 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.943085 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:38.639586 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.140158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:40.764887 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.265118 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:41.706713 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:41.720209 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:41.720298 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:41.752969 1147424 cri.go:89] found id: ""
	I0731 21:32:41.753005 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.753016 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:41.753025 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:41.753095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:41.786502 1147424 cri.go:89] found id: ""
	I0731 21:32:41.786542 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.786555 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:41.786564 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:41.786635 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:41.819958 1147424 cri.go:89] found id: ""
	I0731 21:32:41.819989 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.820000 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:41.820008 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:41.820073 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:41.855104 1147424 cri.go:89] found id: ""
	I0731 21:32:41.855141 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.855153 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:41.855161 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:41.855228 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:41.889375 1147424 cri.go:89] found id: ""
	I0731 21:32:41.889413 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.889423 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:41.889429 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:41.889505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:41.925172 1147424 cri.go:89] found id: ""
	I0731 21:32:41.925199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.925208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:41.925215 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:41.925278 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:41.960951 1147424 cri.go:89] found id: ""
	I0731 21:32:41.960995 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.961009 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:41.961017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:41.961086 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:41.996458 1147424 cri.go:89] found id: ""
	I0731 21:32:41.996493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:41.996506 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:41.996519 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:41.996537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:42.048841 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:42.048889 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:42.062235 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:42.062271 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:42.131510 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:42.131536 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:42.131551 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:42.216993 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:42.217035 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:44.756236 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:44.769719 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:44.769800 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:44.808963 1147424 cri.go:89] found id: ""
	I0731 21:32:44.808998 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.809009 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:44.809017 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:44.809095 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:44.843163 1147424 cri.go:89] found id: ""
	I0731 21:32:44.843199 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.843212 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:44.843225 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:44.843287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:42.943536 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.443141 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:43.140264 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.140607 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:45.764757 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.765226 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:44.877440 1147424 cri.go:89] found id: ""
	I0731 21:32:44.877468 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.877477 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:44.877483 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:44.877537 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:44.911877 1147424 cri.go:89] found id: ""
	I0731 21:32:44.911906 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.911915 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:44.911922 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:44.911974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:44.945516 1147424 cri.go:89] found id: ""
	I0731 21:32:44.945547 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.945558 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:44.945565 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:44.945634 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:44.983858 1147424 cri.go:89] found id: ""
	I0731 21:32:44.983890 1147424 logs.go:276] 0 containers: []
	W0731 21:32:44.983898 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:44.983906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:44.983981 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:45.017030 1147424 cri.go:89] found id: ""
	I0731 21:32:45.017064 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.017075 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:45.017084 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:45.017154 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:45.051005 1147424 cri.go:89] found id: ""
	I0731 21:32:45.051040 1147424 logs.go:276] 0 containers: []
	W0731 21:32:45.051053 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:45.051064 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:45.051077 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:45.100602 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:45.100646 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:45.113843 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:45.113891 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:45.187725 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:45.187760 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:45.187779 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:45.273549 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:45.273588 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:47.813567 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:47.826674 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:47.826762 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:47.863746 1147424 cri.go:89] found id: ""
	I0731 21:32:47.863781 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.863789 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:47.863797 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:47.863860 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:47.901125 1147424 cri.go:89] found id: ""
	I0731 21:32:47.901158 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.901169 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:47.901177 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:47.901247 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:47.936510 1147424 cri.go:89] found id: ""
	I0731 21:32:47.936543 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.936553 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:47.936560 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:47.936618 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:47.972712 1147424 cri.go:89] found id: ""
	I0731 21:32:47.972744 1147424 logs.go:276] 0 containers: []
	W0731 21:32:47.972754 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:47.972764 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:47.972828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:48.007785 1147424 cri.go:89] found id: ""
	I0731 21:32:48.007818 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.007831 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:48.007839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:48.007907 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:48.045821 1147424 cri.go:89] found id: ""
	I0731 21:32:48.045851 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.045863 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:48.045872 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:48.045945 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:48.083790 1147424 cri.go:89] found id: ""
	I0731 21:32:48.083823 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.083832 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:48.083839 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:48.083903 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:48.122430 1147424 cri.go:89] found id: ""
	I0731 21:32:48.122465 1147424 logs.go:276] 0 containers: []
	W0731 21:32:48.122477 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:48.122490 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:48.122505 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:48.200081 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:48.200140 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:48.240500 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:48.240537 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:48.292336 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:48.292393 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:48.305398 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:48.305431 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:48.381327 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:47.943158 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.945740 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:47.638897 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:49.640039 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.269263 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.765262 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:50.881554 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:50.894655 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:50.894740 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:50.928819 1147424 cri.go:89] found id: ""
	I0731 21:32:50.928861 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.928873 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:50.928882 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:50.928950 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:50.962856 1147424 cri.go:89] found id: ""
	I0731 21:32:50.962897 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.962908 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:50.962917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:50.962980 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:50.995765 1147424 cri.go:89] found id: ""
	I0731 21:32:50.995803 1147424 logs.go:276] 0 containers: []
	W0731 21:32:50.995815 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:50.995823 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:50.995892 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:51.034418 1147424 cri.go:89] found id: ""
	I0731 21:32:51.034454 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.034467 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:51.034476 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:51.034534 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:51.070687 1147424 cri.go:89] found id: ""
	I0731 21:32:51.070723 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.070732 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:51.070739 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:51.070828 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:51.106934 1147424 cri.go:89] found id: ""
	I0731 21:32:51.106959 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.106966 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:51.106973 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:51.107026 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:51.143489 1147424 cri.go:89] found id: ""
	I0731 21:32:51.143513 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.143522 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:51.143530 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:51.143591 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:51.180778 1147424 cri.go:89] found id: ""
	I0731 21:32:51.180806 1147424 logs.go:276] 0 containers: []
	W0731 21:32:51.180816 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:51.180827 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:51.180842 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:51.194695 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:51.194734 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:51.262172 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:51.262200 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:51.262220 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:51.344678 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:51.344719 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:51.383624 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:51.383659 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:53.936339 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:53.950362 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:53.950446 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:53.984346 1147424 cri.go:89] found id: ""
	I0731 21:32:53.984376 1147424 logs.go:276] 0 containers: []
	W0731 21:32:53.984391 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:53.984403 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:53.984481 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:54.019937 1147424 cri.go:89] found id: ""
	I0731 21:32:54.019973 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.019986 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:54.019994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:54.020070 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:54.056068 1147424 cri.go:89] found id: ""
	I0731 21:32:54.056120 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.056133 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:54.056142 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:54.056221 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:54.094375 1147424 cri.go:89] found id: ""
	I0731 21:32:54.094407 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.094416 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:54.094422 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:54.094486 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:54.130326 1147424 cri.go:89] found id: ""
	I0731 21:32:54.130362 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.130374 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:54.130383 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:54.130444 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:54.168190 1147424 cri.go:89] found id: ""
	I0731 21:32:54.168228 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.168239 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:54.168248 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:54.168329 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:54.201946 1147424 cri.go:89] found id: ""
	I0731 21:32:54.201979 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.201988 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:54.201994 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:54.202055 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:54.233852 1147424 cri.go:89] found id: ""
	I0731 21:32:54.233888 1147424 logs.go:276] 0 containers: []
	W0731 21:32:54.233896 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:54.233907 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:54.233922 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:54.287620 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:54.287664 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:54.309984 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:54.310019 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:54.382751 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:54.382774 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:54.382789 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:54.460042 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:54.460105 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
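The cycle above repeats throughout this log: with the apiserver down, minikube polls crictl for each control-plane container, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that polling pattern is shown below; the helper name and timings are illustrative assumptions, not minikube's actual code.

    // Sketch only (hypothetical helper, not minikube's implementation): reproduce the
    // polling pattern visible above -- ask crictl for a control-plane container and
    // treat an empty ID list (the `found id: ""` lines) as "not running yet".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // hasContainer runs `sudo crictl ps -a --quiet --name=<name>` and reports
    // whether any container ID came back.
    func hasContainer(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if ok, err := hasContainer("kube-apiserver"); err == nil && ok {
                fmt.Println("kube-apiserver container found")
                return
            }
            time.Sleep(3 * time.Second) // the log above shows roughly one cycle every ~3s
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }

Each `found id: ""` line in the log corresponds to an empty result from such a crictl query, which is why the subsequent describe-nodes call fails with "connection to the server localhost:8443 was refused".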
	I0731 21:32:52.443844 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.943970 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:52.140449 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:54.141072 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:56.639439 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:55.264301 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.265478 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:57.002945 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:32:57.015673 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:32:57.015763 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:32:57.049464 1147424 cri.go:89] found id: ""
	I0731 21:32:57.049493 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.049502 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:32:57.049509 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:32:57.049561 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:32:57.083326 1147424 cri.go:89] found id: ""
	I0731 21:32:57.083356 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.083365 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:32:57.083371 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:32:57.083431 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:32:57.115103 1147424 cri.go:89] found id: ""
	I0731 21:32:57.115132 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.115141 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:32:57.115147 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:32:57.115200 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:32:57.153178 1147424 cri.go:89] found id: ""
	I0731 21:32:57.153214 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.153226 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:32:57.153234 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:32:57.153310 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:32:57.187940 1147424 cri.go:89] found id: ""
	I0731 21:32:57.187980 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.187992 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:32:57.188001 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:32:57.188072 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:32:57.221825 1147424 cri.go:89] found id: ""
	I0731 21:32:57.221858 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.221868 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:32:57.221884 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:32:57.221948 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:32:57.255087 1147424 cri.go:89] found id: ""
	I0731 21:32:57.255115 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.255128 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:32:57.255137 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:32:57.255207 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:32:57.290095 1147424 cri.go:89] found id: ""
	I0731 21:32:57.290131 1147424 logs.go:276] 0 containers: []
	W0731 21:32:57.290143 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:32:57.290157 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:32:57.290175 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:32:57.343777 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:32:57.343819 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:32:57.356944 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:32:57.356981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:32:57.431220 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:32:57.431248 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:32:57.431267 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:32:57.518079 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:32:57.518123 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:32:57.442671 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.942490 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:58.639801 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.139266 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:32:59.764738 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:01.765367 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:04.265447 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:00.056208 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:00.069424 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:00.069511 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:00.105855 1147424 cri.go:89] found id: ""
	I0731 21:33:00.105891 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.105902 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:00.105909 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:00.105984 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:00.143079 1147424 cri.go:89] found id: ""
	I0731 21:33:00.143109 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.143120 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:00.143128 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:00.143195 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:00.178114 1147424 cri.go:89] found id: ""
	I0731 21:33:00.178150 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.178162 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:00.178171 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:00.178235 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:00.212518 1147424 cri.go:89] found id: ""
	I0731 21:33:00.212547 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.212556 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:00.212562 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:00.212626 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:00.246653 1147424 cri.go:89] found id: ""
	I0731 21:33:00.246683 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.246693 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:00.246702 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:00.246795 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:00.280163 1147424 cri.go:89] found id: ""
	I0731 21:33:00.280196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.280208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:00.280216 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:00.280285 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:00.313593 1147424 cri.go:89] found id: ""
	I0731 21:33:00.313622 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.313631 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:00.313637 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:00.313691 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:00.347809 1147424 cri.go:89] found id: ""
	I0731 21:33:00.347838 1147424 logs.go:276] 0 containers: []
	W0731 21:33:00.347846 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:00.347858 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:00.347870 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:00.360481 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:00.360515 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:00.433834 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:00.433855 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:00.433869 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:00.513679 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:00.513721 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:00.551415 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:00.551466 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.101928 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:03.114183 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:03.114262 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:03.152397 1147424 cri.go:89] found id: ""
	I0731 21:33:03.152427 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.152442 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:03.152449 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:03.152505 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:03.186595 1147424 cri.go:89] found id: ""
	I0731 21:33:03.186626 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.186640 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:03.186647 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:03.186700 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:03.219085 1147424 cri.go:89] found id: ""
	I0731 21:33:03.219116 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.219126 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:03.219135 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:03.219201 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:03.251541 1147424 cri.go:89] found id: ""
	I0731 21:33:03.251573 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.251583 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:03.251592 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:03.251660 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:03.287880 1147424 cri.go:89] found id: ""
	I0731 21:33:03.287911 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.287920 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:03.287927 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:03.287992 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:03.320317 1147424 cri.go:89] found id: ""
	I0731 21:33:03.320352 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.320361 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:03.320367 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:03.320423 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:03.355185 1147424 cri.go:89] found id: ""
	I0731 21:33:03.355213 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.355222 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:03.355228 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:03.355281 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:03.389900 1147424 cri.go:89] found id: ""
	I0731 21:33:03.389933 1147424 logs.go:276] 0 containers: []
	W0731 21:33:03.389941 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:03.389951 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:03.389985 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:03.427299 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:03.427331 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:03.480994 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:03.481037 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:03.494372 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:03.494403 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:03.565542 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:03.565568 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:03.565583 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:01.942941 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.943391 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:03.140871 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:05.141254 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.764762 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.264188 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:06.146397 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:06.159705 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:06.159791 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:06.195594 1147424 cri.go:89] found id: ""
	I0731 21:33:06.195628 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.195640 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:06.195649 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:06.195726 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:06.230163 1147424 cri.go:89] found id: ""
	I0731 21:33:06.230216 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.230229 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:06.230239 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:06.230313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:06.266937 1147424 cri.go:89] found id: ""
	I0731 21:33:06.266968 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.266979 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:06.266986 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:06.267048 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:06.299791 1147424 cri.go:89] found id: ""
	I0731 21:33:06.299828 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.299838 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:06.299849 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:06.299906 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:06.333861 1147424 cri.go:89] found id: ""
	I0731 21:33:06.333900 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.333912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:06.333920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:06.333991 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:06.366156 1147424 cri.go:89] found id: ""
	I0731 21:33:06.366196 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.366208 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:06.366217 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:06.366292 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:06.400567 1147424 cri.go:89] found id: ""
	I0731 21:33:06.400598 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.400607 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:06.400613 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:06.400665 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:06.443745 1147424 cri.go:89] found id: ""
	I0731 21:33:06.443771 1147424 logs.go:276] 0 containers: []
	W0731 21:33:06.443782 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:06.443794 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:06.443809 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:06.530140 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:06.530189 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.570842 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:06.570883 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:06.621760 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:06.621800 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:06.636562 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:06.636602 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:06.702451 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.203607 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:09.215590 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:09.215678 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:09.253063 1147424 cri.go:89] found id: ""
	I0731 21:33:09.253092 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.253101 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:09.253108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:09.253159 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:09.287000 1147424 cri.go:89] found id: ""
	I0731 21:33:09.287036 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.287051 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:09.287060 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:09.287117 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:09.321173 1147424 cri.go:89] found id: ""
	I0731 21:33:09.321211 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.321223 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:09.321232 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:09.321287 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:09.356860 1147424 cri.go:89] found id: ""
	I0731 21:33:09.356896 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.356908 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:09.356918 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:09.356979 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:09.390469 1147424 cri.go:89] found id: ""
	I0731 21:33:09.390509 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.390520 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:09.390528 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:09.390601 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:09.426265 1147424 cri.go:89] found id: ""
	I0731 21:33:09.426295 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.426304 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:09.426311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:09.426376 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:09.460197 1147424 cri.go:89] found id: ""
	I0731 21:33:09.460234 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.460246 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:09.460254 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:09.460313 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:09.492708 1147424 cri.go:89] found id: ""
	I0731 21:33:09.492737 1147424 logs.go:276] 0 containers: []
	W0731 21:33:09.492745 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:09.492757 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:09.492769 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:09.543768 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:09.543814 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:09.557496 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:09.557531 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:09.622956 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:09.622994 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:09.623012 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:09.700157 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:09.700202 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:06.443888 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:08.942866 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:07.638676 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:09.639158 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.639719 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:11.264932 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.763994 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:12.238767 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:12.258742 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:12.258829 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:12.319452 1147424 cri.go:89] found id: ""
	I0731 21:33:12.319501 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.319514 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:12.319523 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:12.319596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:12.353740 1147424 cri.go:89] found id: ""
	I0731 21:33:12.353777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.353789 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:12.353798 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:12.353872 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:12.387735 1147424 cri.go:89] found id: ""
	I0731 21:33:12.387777 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.387790 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:12.387799 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:12.387864 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:12.420145 1147424 cri.go:89] found id: ""
	I0731 21:33:12.420184 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.420196 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:12.420204 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:12.420261 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:12.454861 1147424 cri.go:89] found id: ""
	I0731 21:33:12.454899 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.454912 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:12.454920 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:12.454993 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:12.487910 1147424 cri.go:89] found id: ""
	I0731 21:33:12.487938 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.487946 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:12.487954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:12.488007 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:12.524634 1147424 cri.go:89] found id: ""
	I0731 21:33:12.524663 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.524672 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:12.524678 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:12.524747 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:12.557542 1147424 cri.go:89] found id: ""
	I0731 21:33:12.557572 1147424 logs.go:276] 0 containers: []
	W0731 21:33:12.557581 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:12.557592 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:12.557605 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:12.638725 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:12.638767 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:12.675009 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:12.675041 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:12.725508 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:12.725556 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:12.739281 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:12.739315 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:12.809186 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:11.443163 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:13.942775 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.944913 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:14.140466 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:16.639513 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.764068 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:17.764557 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:15.310278 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:15.323392 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:15.323489 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:15.356737 1147424 cri.go:89] found id: ""
	I0731 21:33:15.356768 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.356779 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:15.356794 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:15.356870 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:15.389979 1147424 cri.go:89] found id: ""
	I0731 21:33:15.390018 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.390027 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:15.390033 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:15.390097 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:15.422777 1147424 cri.go:89] found id: ""
	I0731 21:33:15.422810 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.422818 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:15.422825 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:15.422880 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:15.457962 1147424 cri.go:89] found id: ""
	I0731 21:33:15.458000 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.458012 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:15.458021 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:15.458088 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:15.495495 1147424 cri.go:89] found id: ""
	I0731 21:33:15.495528 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.495539 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:15.495552 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:15.495611 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:15.528671 1147424 cri.go:89] found id: ""
	I0731 21:33:15.528700 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.528709 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:15.528715 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:15.528782 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:15.562579 1147424 cri.go:89] found id: ""
	I0731 21:33:15.562609 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.562617 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:15.562623 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:15.562688 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:15.597326 1147424 cri.go:89] found id: ""
	I0731 21:33:15.597362 1147424 logs.go:276] 0 containers: []
	W0731 21:33:15.597374 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:15.597387 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:15.597406 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:15.611017 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:15.611049 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:15.679729 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:15.679756 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:15.679776 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:15.763719 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:15.763764 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:15.801974 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:15.802003 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.350340 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:18.362952 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:18.363030 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:18.396153 1147424 cri.go:89] found id: ""
	I0731 21:33:18.396207 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.396218 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:18.396227 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:18.396300 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:18.429261 1147424 cri.go:89] found id: ""
	I0731 21:33:18.429291 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.429302 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:18.429311 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:18.429386 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:18.462056 1147424 cri.go:89] found id: ""
	I0731 21:33:18.462093 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.462105 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:18.462115 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:18.462189 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:18.494847 1147424 cri.go:89] found id: ""
	I0731 21:33:18.494887 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.494900 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:18.494908 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:18.494974 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:18.527982 1147424 cri.go:89] found id: ""
	I0731 21:33:18.528020 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.528033 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:18.528041 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:18.528137 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:18.562114 1147424 cri.go:89] found id: ""
	I0731 21:33:18.562148 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.562159 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:18.562168 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:18.562227 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:18.600226 1147424 cri.go:89] found id: ""
	I0731 21:33:18.600256 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.600267 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:18.600275 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:18.600346 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:18.635899 1147424 cri.go:89] found id: ""
	I0731 21:33:18.635935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:18.635947 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:18.635960 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:18.635976 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:18.687338 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:18.687380 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:18.700274 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:18.700308 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:18.772852 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:18.772882 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:18.772900 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:18.854876 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:18.854919 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:18.442684 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:20.942998 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.139878 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.139917 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:19.764588 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:21.765547 1147232 pod_ready.go:102] pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:22.759208 1147232 pod_ready.go:81] duration metric: took 4m0.00082409s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" ...
	E0731 21:33:22.759249 1147232 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6jkw9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:33:22.759276 1147232 pod_ready.go:38] duration metric: took 4m11.578718686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:33:22.759313 1147232 kubeadm.go:597] duration metric: took 4m19.399292481s to restartPrimaryControlPlane
	W0731 21:33:22.759429 1147232 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:22.759478 1147232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:21.392589 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:21.405646 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:21.405767 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:21.441055 1147424 cri.go:89] found id: ""
	I0731 21:33:21.441088 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.441100 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:21.441108 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:21.441173 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:21.474545 1147424 cri.go:89] found id: ""
	I0731 21:33:21.474583 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.474593 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:21.474599 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:21.474654 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:21.506004 1147424 cri.go:89] found id: ""
	I0731 21:33:21.506032 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.506041 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:21.506047 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:21.506115 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:21.539842 1147424 cri.go:89] found id: ""
	I0731 21:33:21.539880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.539893 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:21.539902 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:21.539966 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:21.573913 1147424 cri.go:89] found id: ""
	I0731 21:33:21.573943 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.573951 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:21.573958 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:21.574012 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:21.608677 1147424 cri.go:89] found id: ""
	I0731 21:33:21.608715 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.608727 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:21.608736 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:21.608811 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:21.642032 1147424 cri.go:89] found id: ""
	I0731 21:33:21.642063 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.642073 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:21.642082 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:21.642146 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:21.676279 1147424 cri.go:89] found id: ""
	I0731 21:33:21.676312 1147424 logs.go:276] 0 containers: []
	W0731 21:33:21.676322 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:21.676332 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:21.676346 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:21.688928 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:21.688981 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:33:21.757596 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:21.757620 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:21.757637 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:21.836301 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:21.836350 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:21.873553 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:21.873594 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.427756 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:24.440917 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:33:24.440998 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:33:24.475902 1147424 cri.go:89] found id: ""
	I0731 21:33:24.475935 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.475946 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:33:24.475954 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:33:24.476031 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:33:24.509078 1147424 cri.go:89] found id: ""
	I0731 21:33:24.509115 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.509128 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:33:24.509136 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:33:24.509205 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:33:24.542466 1147424 cri.go:89] found id: ""
	I0731 21:33:24.542506 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.542518 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:33:24.542527 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:33:24.542589 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:33:24.579457 1147424 cri.go:89] found id: ""
	I0731 21:33:24.579496 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.579515 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:33:24.579524 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:33:24.579596 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:33:24.623843 1147424 cri.go:89] found id: ""
	I0731 21:33:24.623880 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.623891 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:33:24.623899 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:33:24.623971 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:33:24.661401 1147424 cri.go:89] found id: ""
	I0731 21:33:24.661437 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.661448 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:33:24.661457 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:33:24.661526 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:33:24.694521 1147424 cri.go:89] found id: ""
	I0731 21:33:24.694551 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.694559 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:33:24.694567 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:33:24.694657 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:33:24.730530 1147424 cri.go:89] found id: ""
	I0731 21:33:24.730566 1147424 logs.go:276] 0 containers: []
	W0731 21:33:24.730578 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:33:24.730591 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:33:24.730607 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:33:24.801836 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:33:24.801890 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:33:24.817753 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:33:24.817803 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:33:23.444464 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.942484 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:23.140282 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:25.638870 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	W0731 21:33:24.901125 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:33:24.901154 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:33:24.901170 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:33:24.984008 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:33:24.984054 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:33:27.533575 1147424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:33:27.546174 1147424 kubeadm.go:597] duration metric: took 4m1.98040234s to restartPrimaryControlPlane
	W0731 21:33:27.546264 1147424 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:33:27.546291 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:33:28.848116 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.301779163s)
	I0731 21:33:28.848201 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:28.862706 1147424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:28.872753 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:28.882437 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:28.882467 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:28.882527 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:28.892810 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:28.892893 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:28.901944 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:28.911008 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:28.911089 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:28.920446 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.929557 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:28.929627 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:28.939095 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:28.948405 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:28.948478 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:28.958084 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:29.033876 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:33:29.033969 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:29.180061 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:29.180208 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:29.180304 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:29.352063 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:29.354698 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:29.354847 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:29.354944 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:29.355065 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:29.355151 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:29.355244 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:29.355344 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:29.355454 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:29.355562 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:29.355675 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:29.355800 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:29.355855 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:29.355906 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:29.657622 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:29.951029 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:30.025514 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:30.502515 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:30.518575 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:30.520148 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:30.520332 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:30.670223 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:27.948560 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.442457 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:28.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.139394 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:30.672807 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:33:30.672945 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:30.681152 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:30.682190 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:30.683416 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:30.688543 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:33:32.942316 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.443021 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:32.639784 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:35.139844 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.945781 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.442632 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:37.639625 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:40.139364 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.942420 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.942739 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:42.139763 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:44.639285 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:46.943777 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.442396 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:47.138913 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:49.139244 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:51.139970 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.946266 1147232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.186759545s)
	I0731 21:33:53.946372 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:33:53.960849 1147232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:33:53.971957 1147232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:33:53.981956 1147232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:33:53.981997 1147232 kubeadm.go:157] found existing configuration files:
	
	I0731 21:33:53.982061 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:33:53.991700 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:33:53.991794 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:33:54.001558 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:33:54.010863 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:33:54.010939 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:33:54.021132 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.032655 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:33:54.032745 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:33:54.042684 1147232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:33:54.052522 1147232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:33:54.052591 1147232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:33:54.062401 1147232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:33:54.110034 1147232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:33:54.110111 1147232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:33:54.241728 1147232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:33:54.241910 1147232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:33:54.242057 1147232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:33:54.453017 1147232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:33:54.454705 1147232 out.go:204]   - Generating certificates and keys ...
	I0731 21:33:54.454822 1147232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:33:54.459233 1147232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:33:54.459344 1147232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:33:54.459427 1147232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:33:54.459525 1147232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:33:54.459612 1147232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:33:54.459698 1147232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:33:54.459807 1147232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:33:54.459918 1147232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:33:54.460026 1147232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:33:54.460083 1147232 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:33:54.460190 1147232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:33:54.524149 1147232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:33:54.777800 1147232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:33:54.921782 1147232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:33:55.044166 1147232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:33:55.204096 1147232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:33:55.204767 1147232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:33:55.207263 1147232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:33:51.442995 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.444424 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.944751 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:53.639209 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.639317 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:55.208851 1147232 out.go:204]   - Booting up control plane ...
	I0731 21:33:55.208977 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:33:55.209090 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:33:55.209331 1147232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:33:55.229113 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:33:55.229800 1147232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:33:55.229867 1147232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:33:55.356937 1147232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:33:55.357076 1147232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:33:55.858979 1147232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.083488ms
	I0731 21:33:55.859109 1147232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:34:00.863345 1147232 kubeadm.go:310] [api-check] The API server is healthy after 5.002699171s
	I0731 21:34:00.879484 1147232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:34:00.894019 1147232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:34:00.928443 1147232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:34:00.928739 1147232 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-563652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:34:00.941793 1147232 kubeadm.go:310] [bootstrap-token] Using token: zsizu4.9crnq3d9xqkkbhr5
	I0731 21:33:57.947020 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.442694 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:57.639666 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:33:59.640630 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:01.640684 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:00.943202 1147232 out.go:204]   - Configuring RBAC rules ...
	I0731 21:34:00.943358 1147232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:34:00.951121 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:34:00.959955 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:34:00.963669 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:34:00.967795 1147232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:34:00.972804 1147232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:34:01.271139 1147232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:34:01.705953 1147232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:34:02.269466 1147232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:34:02.271800 1147232 kubeadm.go:310] 
	I0731 21:34:02.271904 1147232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:34:02.271915 1147232 kubeadm.go:310] 
	I0731 21:34:02.271994 1147232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:34:02.272005 1147232 kubeadm.go:310] 
	I0731 21:34:02.272040 1147232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:34:02.272127 1147232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:34:02.272206 1147232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:34:02.272212 1147232 kubeadm.go:310] 
	I0731 21:34:02.272290 1147232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:34:02.272337 1147232 kubeadm.go:310] 
	I0731 21:34:02.272453 1147232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:34:02.272477 1147232 kubeadm.go:310] 
	I0731 21:34:02.272557 1147232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:34:02.272644 1147232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:34:02.272735 1147232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:34:02.272751 1147232 kubeadm.go:310] 
	I0731 21:34:02.272871 1147232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:34:02.272972 1147232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:34:02.272991 1147232 kubeadm.go:310] 
	I0731 21:34:02.273097 1147232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273207 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c \
	I0731 21:34:02.273252 1147232 kubeadm.go:310] 	--control-plane 
	I0731 21:34:02.273268 1147232 kubeadm.go:310] 
	I0731 21:34:02.273371 1147232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:34:02.273381 1147232 kubeadm.go:310] 
	I0731 21:34:02.273492 1147232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zsizu4.9crnq3d9xqkkbhr5 \
	I0731 21:34:02.273643 1147232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1928fe2cc4a99946917133c136483b91127c1282b38b4ad7fb0fd274625b9f3c 
	I0731 21:34:02.274138 1147232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:34:02.274200 1147232 cni.go:84] Creating CNI manager for ""
	I0731 21:34:02.274221 1147232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:34:02.275876 1147232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:34:02.277208 1147232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:34:02.287526 1147232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:34:02.306070 1147232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:34:02.306192 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.306218 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-563652 minikube.k8s.io/updated_at=2024_07_31T21_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=578c9d758a3a1e9afe57056f3521c9dabc3709f1 minikube.k8s.io/name=embed-certs-563652 minikube.k8s.io/primary=true
	I0731 21:34:02.530554 1147232 ops.go:34] apiserver oom_adj: -16
	I0731 21:34:02.530710 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.031525 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:03.530812 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:04.030780 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:02.444274 1148013 pod_ready.go:102] pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.443668 1148013 pod_ready.go:81] duration metric: took 4m0.00729593s for pod "metrics-server-569cc877fc-968kv" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:04.443701 1148013 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:04.443712 1148013 pod_ready.go:38] duration metric: took 4m3.607055366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:04.443731 1148013 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:04.443795 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:04.443885 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:04.483174 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:04.483203 1148013 cri.go:89] found id: ""
	I0731 21:34:04.483212 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:04.483265 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.488570 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:04.488660 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:04.523705 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:04.523734 1148013 cri.go:89] found id: ""
	I0731 21:34:04.523745 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:04.523816 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.528231 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:04.528304 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:04.565303 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:04.565332 1148013 cri.go:89] found id: ""
	I0731 21:34:04.565341 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:04.565394 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.570089 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:04.570172 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:04.604648 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:04.604676 1148013 cri.go:89] found id: ""
	I0731 21:34:04.604686 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:04.604770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.609219 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:04.609306 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:04.644851 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.644876 1148013 cri.go:89] found id: ""
	I0731 21:34:04.644887 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:04.644954 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.649760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:04.649859 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:04.686438 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:04.686466 1148013 cri.go:89] found id: ""
	I0731 21:34:04.686477 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:04.686546 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.690707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:04.690791 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:04.726245 1148013 cri.go:89] found id: ""
	I0731 21:34:04.726276 1148013 logs.go:276] 0 containers: []
	W0731 21:34:04.726284 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:04.726291 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:04.726346 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:04.766009 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:04.766034 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.766038 1148013 cri.go:89] found id: ""
	I0731 21:34:04.766045 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:04.766105 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.770130 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:04.774449 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:04.774479 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:04.822626 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:04.822660 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:04.857618 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:04.857648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:04.908962 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:04.908993 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:04.962708 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:04.962759 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:04.977232 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:04.977271 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:05.109227 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:05.109264 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:05.163213 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:05.163250 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:05.200524 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:05.200564 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:05.242464 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:05.242501 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:05.278233 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:05.278263 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:05.328930 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:05.328975 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:05.367827 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:05.367860 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
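The "Gathering logs for ..." lines above come from the harness enumerating containers with `crictl ps -a --quiet --name=<component>` and then dumping each one with `crictl logs --tail 400 <id>` (plus journalctl for the kubelet and CRI-O units). A minimal local sketch of that pattern, assuming crictl is on PATH and may be run via sudo; minikube itself executes these same commands over SSH inside the guest VM:

    // A sketch of the per-component log dump seen in the lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or not) whose name matches name.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println("listing", name, "failed:", err)
                continue
            }
            for _, id := range ids {
                // --tail 400 mirrors the bound the harness puts on each dump.
                logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
            }
        }
    }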
	I0731 21:34:04.140237 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:06.641725 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:04.531795 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.030854 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:05.530821 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.031777 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:06.531171 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.030885 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:07.531555 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.031798 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:08.531512 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:09.031778 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
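The repeated `kubectl get sa default` invocations above are a ~500 ms polling loop waiting for the cluster's default service account to exist before system privileges are elevated. A sketch of that loop; the kubeconfig path matches the log, while the plain `kubectl` binary and the 2-minute deadline are illustrative simplifications:

    // Poll until `kubectl get sa default` succeeds or the deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~half-second cadence in the log
        }
        fmt.Println("timed out waiting for the default service account")
    }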
	I0731 21:34:08.349628 1148013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:08.364164 1148013 api_server.go:72] duration metric: took 4m15.266433533s to wait for apiserver process to appear ...
	I0731 21:34:08.364205 1148013 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:08.364257 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:08.364321 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:08.398165 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:08.398194 1148013 cri.go:89] found id: ""
	I0731 21:34:08.398205 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:08.398270 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.402707 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:08.402780 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:08.444972 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:08.444998 1148013 cri.go:89] found id: ""
	I0731 21:34:08.445007 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:08.445067 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.449385 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:08.449458 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:08.487006 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:08.487040 1148013 cri.go:89] found id: ""
	I0731 21:34:08.487053 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:08.487123 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.491544 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:08.491618 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:08.526239 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:08.526271 1148013 cri.go:89] found id: ""
	I0731 21:34:08.526282 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:08.526334 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.530760 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:08.530864 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:08.579799 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:08.579829 1148013 cri.go:89] found id: ""
	I0731 21:34:08.579844 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:08.579910 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.584172 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:08.584244 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:08.624614 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.624689 1148013 cri.go:89] found id: ""
	I0731 21:34:08.624703 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:08.624770 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.629264 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:08.629340 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:08.669137 1148013 cri.go:89] found id: ""
	I0731 21:34:08.669170 1148013 logs.go:276] 0 containers: []
	W0731 21:34:08.669181 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:08.669189 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:08.669256 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:08.712145 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.712174 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:08.712179 1148013 cri.go:89] found id: ""
	I0731 21:34:08.712187 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:08.712246 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.717005 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:08.720992 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:08.721024 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:08.775824 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:08.775876 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:08.822904 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:08.822940 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:09.279585 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:09.279641 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:09.328597 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:09.328635 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:09.382901 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:09.382959 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:09.397461 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:09.397500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:09.437452 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:09.437494 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:09.472580 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:09.472614 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:09.512902 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:09.512938 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:09.558351 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:09.558394 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:09.669960 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:09.670001 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:09.714731 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:09.714770 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:09.140243 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:11.639122 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:09.531101 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.031417 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:10.531369 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.031687 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:11.530902 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.030877 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:12.531359 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.030850 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:13.530829 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.030737 1147232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:34:14.137727 1147232 kubeadm.go:1113] duration metric: took 11.831600904s to wait for elevateKubeSystemPrivileges
	I0731 21:34:14.137775 1147232 kubeadm.go:394] duration metric: took 5m10.826279216s to StartCluster
	I0731 21:34:14.137810 1147232 settings.go:142] acquiring lock: {Name:mk8a252a8f640d07862f2ed638fe448bfe89b0e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.137941 1147232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:34:14.140680 1147232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19360-1093692/kubeconfig: {Name:mk8eb958100b302d3386f32db61ca0372302d31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:34:14.141051 1147232 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:34:14.141091 1147232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:34:14.141199 1147232 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-563652"
	I0731 21:34:14.141240 1147232 addons.go:69] Setting default-storageclass=true in profile "embed-certs-563652"
	I0731 21:34:14.141263 1147232 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-563652"
	W0731 21:34:14.141272 1147232 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:34:14.141291 1147232 config.go:182] Loaded profile config "embed-certs-563652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:34:14.141302 1147232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-563652"
	I0731 21:34:14.141309 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141337 1147232 addons.go:69] Setting metrics-server=true in profile "embed-certs-563652"
	I0731 21:34:14.141362 1147232 addons.go:234] Setting addon metrics-server=true in "embed-certs-563652"
	W0731 21:34:14.141373 1147232 addons.go:243] addon metrics-server should already be in state true
	I0731 21:34:14.141400 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.141735 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141802 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141745 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.141876 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.141748 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.142070 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.143403 1147232 out.go:177] * Verifying Kubernetes components...
	I0731 21:34:14.144894 1147232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:34:14.160359 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0731 21:34:14.160405 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0731 21:34:14.160631 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0731 21:34:14.160893 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.160996 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161048 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.161478 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161497 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161643 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161657 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.161721 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.161749 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.162028 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162069 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162029 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.162250 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.162557 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162596 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.162654 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.162675 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.166106 1147232 addons.go:234] Setting addon default-storageclass=true in "embed-certs-563652"
	W0731 21:34:14.166129 1147232 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:34:14.166153 1147232 host.go:66] Checking if "embed-certs-563652" exists ...
	I0731 21:34:14.166426 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.166463 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.179941 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I0731 21:34:14.180522 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.181056 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.181077 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.181522 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.181726 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.182994 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0731 21:34:14.183599 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.183753 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.183958 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0731 21:34:14.184127 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.184145 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.184538 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.184645 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185028 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.185047 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.185306 1147232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:34:14.185343 1147232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:34:14.185458 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.185527 1147232 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:34:14.185650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.186884 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:34:14.186912 1147232 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:34:14.186937 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.187442 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.189035 1147232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:34:14.190019 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190617 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.190644 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.190680 1147232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.190700 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:34:14.190725 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.191369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.191607 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.191893 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.192265 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.194023 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194383 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.194407 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.194650 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.194852 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.195073 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.195233 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.207044 1147232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0731 21:34:14.207748 1147232 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:34:14.208292 1147232 main.go:141] libmachine: Using API Version  1
	I0731 21:34:14.208319 1147232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:34:14.208759 1147232 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:34:14.208962 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetState
	I0731 21:34:14.210554 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .DriverName
	I0731 21:34:14.210881 1147232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.210902 1147232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:34:14.210925 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHHostname
	I0731 21:34:14.214212 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214803 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:4d:dd", ip: ""} in network mk-embed-certs-563652: {Iface:virbr4 ExpiryTime:2024-07-31 22:28:47 +0000 UTC Type:0 Mac:52:54:00:f3:4d:dd Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:embed-certs-563652 Clientid:01:52:54:00:f3:4d:dd}
	I0731 21:34:14.215026 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | domain embed-certs-563652 has defined IP address 192.168.50.203 and MAC address 52:54:00:f3:4d:dd in network mk-embed-certs-563652
	I0731 21:34:14.214918 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHPort
	I0731 21:34:14.216141 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHKeyPath
	I0731 21:34:14.216369 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .GetSSHUsername
	I0731 21:34:14.216583 1147232 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/embed-certs-563652/id_rsa Username:docker}
	I0731 21:34:14.360826 1147232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:34:14.379220 1147232 node_ready.go:35] waiting up to 6m0s for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387294 1147232 node_ready.go:49] node "embed-certs-563652" has status "Ready":"True"
	I0731 21:34:14.387331 1147232 node_ready.go:38] duration metric: took 8.073597ms for node "embed-certs-563652" to be "Ready" ...
	I0731 21:34:14.387344 1147232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:34:14.392589 1147232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400252 1147232 pod_ready.go:92] pod "etcd-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.400276 1147232 pod_ready.go:81] duration metric: took 7.654503ms for pod "etcd-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.400285 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405540 1147232 pod_ready.go:92] pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.405564 1147232 pod_ready.go:81] duration metric: took 5.273822ms for pod "kube-apiserver-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.405573 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410097 1147232 pod_ready.go:92] pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.410118 1147232 pod_ready.go:81] duration metric: took 4.539492ms for pod "kube-controller-manager-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.410127 1147232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414070 1147232 pod_ready.go:92] pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace has status "Ready":"True"
	I0731 21:34:14.414094 1147232 pod_ready.go:81] duration metric: took 3.961128ms for pod "kube-scheduler-embed-certs-563652" in "kube-system" namespace to be "Ready" ...
	I0731 21:34:14.414101 1147232 pod_ready.go:38] duration metric: took 26.744925ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
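The pod_ready lines above report, for each system-critical pod, whether its PodReady condition is True. A client-go sketch of that check; the kubeconfig path is a placeholder, and the pod name is just one of the control-plane pods named in the log:

    // Check the Ready condition of a single kube-system pod.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-563652", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
    }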
	I0731 21:34:14.414117 1147232 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:14.414166 1147232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:14.427922 1147232 api_server.go:72] duration metric: took 286.820645ms to wait for apiserver process to appear ...
	I0731 21:34:14.427955 1147232 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:14.427976 1147232 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0731 21:34:14.433697 1147232 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0731 21:34:14.435062 1147232 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:14.435088 1147232 api_server.go:131] duration metric: took 7.125728ms to wait for apiserver health ...
	I0731 21:34:14.435096 1147232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:10.689650 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:34:10.690301 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:10.690529 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
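The kubeadm "[kubelet-check]" above is the equivalent of `curl -sSL http://localhost:10248/healthz`; the "connection refused" means nothing is listening on the kubelet's healthz port yet. A minimal version of the same probe:

    // Probe the kubelet healthz endpoint on localhost:10248.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // "connection refused", as in the log, usually means the kubelet
            // process is not running (or not yet listening) on port 10248.
            fmt.Println("kubelet healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }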
	I0731 21:34:14.497864 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:34:14.523526 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:34:14.523560 1147232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:34:14.523656 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:34:14.552390 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:34:14.552424 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:34:14.586389 1147232 system_pods.go:59] 4 kube-system pods found
	I0731 21:34:14.586421 1147232 system_pods.go:61] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.586426 1147232 system_pods.go:61] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.586429 1147232 system_pods.go:61] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.586433 1147232 system_pods.go:61] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.586440 1147232 system_pods.go:74] duration metric: took 151.337561ms to wait for pod list to return data ...
	I0731 21:34:14.586448 1147232 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:14.613255 1147232 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:34:14.613292 1147232 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:34:14.677966 1147232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
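The addon step above copies the metrics-server manifests into /etc/kubernetes/addons and applies them in a single kubectl invocation with KUBECONFIG pointed at the cluster. A simplified sketch; the manifest paths mirror the log, and plain `kubectl` stands in for the versioned binary under /var/lib/minikube/binaries:

    // Apply the metrics-server addon manifests in one kubectl call.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }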
	I0731 21:34:14.728484 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.728522 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.728906 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.728971 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.728992 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729005 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.729016 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.729280 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.729300 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.729315 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736315 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:14.736340 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:14.736605 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:14.736611 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:14.736628 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:14.783127 1147232 default_sa.go:45] found service account: "default"
	I0731 21:34:14.783169 1147232 default_sa.go:55] duration metric: took 196.713133ms for default service account to be created ...
	I0731 21:34:14.783181 1147232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:14.998421 1147232 system_pods.go:86] 5 kube-system pods found
	I0731 21:34:14.998459 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:14.998467 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:14.998476 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:14.998483 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending
	I0731 21:34:14.998488 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:14.998528 1147232 retry.go:31] will retry after 204.720881ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.239227 1147232 system_pods.go:86] 7 kube-system pods found
	I0731 21:34:15.239260 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending
	I0731 21:34:15.239268 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.239275 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.239281 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.239285 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.239291 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.239295 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.239316 1147232 retry.go:31] will retry after 274.032375ms: missing components: kube-dns, kube-proxy
	I0731 21:34:15.470562 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.470596 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.470970 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471046 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471059 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.471070 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.471082 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.471345 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.471384 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.471395 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.530409 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.530454 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530467 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.530475 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.530483 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.530493 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.530501 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.530510 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.530540 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.530554 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.530575 1147232 retry.go:31] will retry after 306.456007ms: missing components: kube-dns, kube-proxy
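The retry.go lines above poll the kube-system pod list and retry after a short delay until no required component is missing. A client-go sketch of that loop; the required-app names and the kubeconfig path are illustrative rather than taken from the harness:

    // Retry until every required kube-system app has a Running pod.
    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func missing(pods []corev1.Pod, required []string) []string {
        var out []string
        for _, want := range required {
            found := false
            for _, p := range pods {
                if strings.HasPrefix(p.Name, want) && p.Status.Phase == corev1.PodRunning {
                    found = true
                    break
                }
            }
            if !found {
                out = append(out, want)
            }
        }
        return out
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        required := []string{"coredns", "kube-proxy", "etcd", "kube-apiserver"}
        for {
            list, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            if m := missing(list.Items, required); len(m) > 0 {
                fmt.Println("will retry, missing components:", m)
                time.Sleep(300 * time.Millisecond)
                continue
            }
            fmt.Println("all required kube-system apps are running")
            return
        }
    }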
	I0731 21:34:15.572796 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.572829 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573170 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573210 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573232 1147232 main.go:141] libmachine: Making call to close driver server
	I0731 21:34:15.573245 1147232 main.go:141] libmachine: (embed-certs-563652) Calling .Close
	I0731 21:34:15.573542 1147232 main.go:141] libmachine: (embed-certs-563652) DBG | Closing plugin on server side
	I0731 21:34:15.573591 1147232 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:34:15.573612 1147232 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:34:15.573631 1147232 addons.go:475] Verifying addon metrics-server=true in "embed-certs-563652"
	I0731 21:34:15.576124 1147232 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0731 21:34:12.254258 1148013 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8444/healthz ...
	I0731 21:34:12.259093 1148013 api_server.go:279] https://192.168.39.145:8444/healthz returned 200:
	ok
	I0731 21:34:12.260261 1148013 api_server.go:141] control plane version: v1.30.3
	I0731 21:34:12.260290 1148013 api_server.go:131] duration metric: took 3.896077544s to wait for apiserver health ...
	I0731 21:34:12.260299 1148013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:12.260325 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:12.260383 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:12.302317 1148013 cri.go:89] found id: "147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.302350 1148013 cri.go:89] found id: ""
	I0731 21:34:12.302361 1148013 logs.go:276] 1 containers: [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329]
	I0731 21:34:12.302435 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.306733 1148013 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:12.306821 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:12.342694 1148013 cri.go:89] found id: "4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:12.342719 1148013 cri.go:89] found id: ""
	I0731 21:34:12.342728 1148013 logs.go:276] 1 containers: [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a]
	I0731 21:34:12.342788 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.346762 1148013 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:12.346848 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:12.382747 1148013 cri.go:89] found id: "bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.382772 1148013 cri.go:89] found id: ""
	I0731 21:34:12.382782 1148013 logs.go:276] 1 containers: [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999]
	I0731 21:34:12.382851 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.386891 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:12.386988 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:12.424735 1148013 cri.go:89] found id: "4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.424768 1148013 cri.go:89] found id: ""
	I0731 21:34:12.424777 1148013 logs.go:276] 1 containers: [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d]
	I0731 21:34:12.424842 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.430109 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:12.430193 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:12.466432 1148013 cri.go:89] found id: "09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.466457 1148013 cri.go:89] found id: ""
	I0731 21:34:12.466464 1148013 logs.go:276] 1 containers: [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d]
	I0731 21:34:12.466520 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.470677 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:12.470761 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:12.509821 1148013 cri.go:89] found id: "cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:12.509847 1148013 cri.go:89] found id: ""
	I0731 21:34:12.509858 1148013 logs.go:276] 1 containers: [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82]
	I0731 21:34:12.509926 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.514114 1148013 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:12.514199 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:12.560780 1148013 cri.go:89] found id: ""
	I0731 21:34:12.560810 1148013 logs.go:276] 0 containers: []
	W0731 21:34:12.560831 1148013 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:12.560841 1148013 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:12.560911 1148013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:12.611528 1148013 cri.go:89] found id: "d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:12.611560 1148013 cri.go:89] found id: "f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.611566 1148013 cri.go:89] found id: ""
	I0731 21:34:12.611575 1148013 logs.go:276] 2 containers: [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247]
	I0731 21:34:12.611643 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.615972 1148013 ssh_runner.go:195] Run: which crictl
	I0731 21:34:12.620046 1148013 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:12.620072 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:12.733715 1148013 logs.go:123] Gathering logs for kube-apiserver [147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329] ...
	I0731 21:34:12.733761 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 147ee230f5cd22e78dd24a8c88da7d061c9de0be78fd1b25efd97271252a3329"
	I0731 21:34:12.785864 1148013 logs.go:123] Gathering logs for coredns [bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999] ...
	I0731 21:34:12.785915 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcb32c8ad4c0bde66a81ac380cc3a2ccdff70726038edf0d8dfe4d403a475999"
	I0731 21:34:12.829467 1148013 logs.go:123] Gathering logs for kube-scheduler [4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d] ...
	I0731 21:34:12.829510 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c93a360c730db31dd7bc792db7ddd10343b56cd54c6a5a0a79842e1c152680d"
	I0731 21:34:12.867566 1148013 logs.go:123] Gathering logs for kube-proxy [09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d] ...
	I0731 21:34:12.867599 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09a74d133e024ea9793172a13d35b2f9854e9fb573fd61f253935c1273ce9b9d"
	I0731 21:34:12.908038 1148013 logs.go:123] Gathering logs for storage-provisioner [f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247] ...
	I0731 21:34:12.908073 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7bd90ab6a69f20dd9d3d4dd351e09c2cb63c6199f5f88f12ed521d27d475247"
	I0731 21:34:12.945425 1148013 logs.go:123] Gathering logs for container status ...
	I0731 21:34:12.945471 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:12.994911 1148013 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:12.994948 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:13.061451 1148013 logs.go:123] Gathering logs for etcd [4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a] ...
	I0731 21:34:13.061500 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc8ee4ac01a6bb5502123cf833ae0d9b68e25682994e3b72c9199de0ad2c34a"
	I0731 21:34:13.107896 1148013 logs.go:123] Gathering logs for kube-controller-manager [cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82] ...
	I0731 21:34:13.107947 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc7cd56cee77f1a7fe20d27b85e0f6567f166ff02d4e1fc8139a3a1fe0957c82"
	I0731 21:34:13.164585 1148013 logs.go:123] Gathering logs for storage-provisioner [d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027] ...
	I0731 21:34:13.164627 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d88829a348f0a4b6413bb642b45467193655a973feb3f6b015a598bf0310b027"
	I0731 21:34:13.206615 1148013 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:13.206648 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:13.587405 1148013 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:13.587453 1148013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:16.108951 1148013 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:16.108985 1148013 system_pods.go:61] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.108990 1148013 system_pods.go:61] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.108994 1148013 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.108998 1148013 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.109001 1148013 system_pods.go:61] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.109004 1148013 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.109010 1148013 system_pods.go:61] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.109015 1148013 system_pods.go:61] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.109023 1148013 system_pods.go:74] duration metric: took 3.848717497s to wait for pod list to return data ...
	I0731 21:34:16.109031 1148013 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:16.112076 1148013 default_sa.go:45] found service account: "default"
	I0731 21:34:16.112124 1148013 default_sa.go:55] duration metric: took 3.083038ms for default service account to be created ...
	I0731 21:34:16.112135 1148013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:16.118191 1148013 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:16.118232 1148013 system_pods.go:89] "coredns-7db6d8ff4d-t9v4z" [2b2a16bc-571e-4d00-b12a-f50dc462f48f] Running
	I0731 21:34:16.118242 1148013 system_pods.go:89] "etcd-default-k8s-diff-port-755535" [d3c7f990-2767-4f89-a45f-c7aae383edfa] Running
	I0731 21:34:16.118250 1148013 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-755535" [da93e45e-e0df-4fb4-bd56-1996aaeb01ec] Running
	I0731 21:34:16.118256 1148013 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-755535" [1ed72e7b-fd28-4390-952b-6ae495cca1df] Running
	I0731 21:34:16.118263 1148013 system_pods.go:89] "kube-proxy-mqcmt" [476ef297-b803-4125-980a-dc5501361d71] Running
	I0731 21:34:16.118269 1148013 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-755535" [8878d335-2e12-41d4-82f3-40a9a08364f9] Running
	I0731 21:34:16.118303 1148013 system_pods.go:89] "metrics-server-569cc877fc-968kv" [c144d022-c820-43eb-bed1-80f2dca27ac0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.118321 1148013 system_pods.go:89] "storage-provisioner" [98ff2805-3db9-4c39-9a70-77073d33e3bd] Running
	I0731 21:34:16.118333 1148013 system_pods.go:126] duration metric: took 6.190349ms to wait for k8s-apps to be running ...
	I0731 21:34:16.118344 1148013 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.118404 1148013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.137723 1148013 system_svc.go:56] duration metric: took 19.365234ms WaitForService to wait for kubelet
	I0731 21:34:16.137753 1148013 kubeadm.go:582] duration metric: took 4m23.040028763s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.137781 1148013 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.141708 1148013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.141737 1148013 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.141805 1148013 node_conditions.go:105] duration metric: took 4.017229ms to run NodePressure ...
	I0731 21:34:16.141831 1148013 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.141849 1148013 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.141868 1148013 start.go:255] writing updated cluster config ...
	I0731 21:34:16.142163 1148013 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.203520 1148013 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.205072 1148013 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-755535" cluster and "default" namespace by default
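Editor's note: the run above finishes with kubectl pointed at the new profile. A quick manual check of the same state the waiter just verified might look like this (a sketch; the context name and pod list are taken from the log above, and metrics-server-569cc877fc-968kv is still expected to be Pending):

    kubectl config current-context                                    # expect: default-k8s-diff-port-755535
    kubectl --context default-k8s-diff-port-755535 -n kube-system get pods
    # the 8 kube-system pods listed above should appear, with metrics-server not yet Ready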
	I0731 21:34:13.639431 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.640300 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:15.577285 1147232 addons.go:510] duration metric: took 1.436190545s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0731 21:34:15.848446 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:15.848480 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848487 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:15.848496 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:15.848502 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:15.848507 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:15.848512 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:15.848516 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:15.848522 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:15.848527 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:15.848545 1147232 retry.go:31] will retry after 538.9255ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.397869 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.397924 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397937 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:34:16.397946 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.397954 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.397962 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.397972 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:34:16.397979 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.397989 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.398003 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:34:16.398152 1147232 retry.go:31] will retry after 511.77725ms: missing components: kube-dns, kube-proxy
	I0731 21:34:16.917181 1147232 system_pods.go:86] 9 kube-system pods found
	I0731 21:34:16.917219 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h54vh" [fd09813a-38fd-4620-8b89-67dbf0ba4173] Running
	I0731 21:34:16.917228 1147232 system_pods.go:89] "coredns-7db6d8ff4d-h6wll" [16a3c2ad-faff-49cf-8a56-d36681b771c2] Running
	I0731 21:34:16.917234 1147232 system_pods.go:89] "etcd-embed-certs-563652" [34d5c42e-32f6-4170-8fb3-5d230253e329] Running
	I0731 21:34:16.917240 1147232 system_pods.go:89] "kube-apiserver-embed-certs-563652" [0def03e3-b5eb-4221-9b39-4d64e286a948] Running
	I0731 21:34:16.917248 1147232 system_pods.go:89] "kube-controller-manager-embed-certs-563652" [19736f1c-dfc3-4ef7-a3a0-97f28711bb7b] Running
	I0731 21:34:16.917256 1147232 system_pods.go:89] "kube-proxy-j6jnw" [8e59f643-6f37-4f5e-a862-89a39008af1a] Running
	I0731 21:34:16.917261 1147232 system_pods.go:89] "kube-scheduler-embed-certs-563652" [2b461139-8ec8-4c9a-871c-0fcef0d0d750] Running
	I0731 21:34:16.917272 1147232 system_pods.go:89] "metrics-server-569cc877fc-7fxm2" [2651e359-a15a-4958-a9bb-9080efbd6943] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:16.917279 1147232 system_pods.go:89] "storage-provisioner" [c0f1c311-1547-42ea-b1ad-cefdf7ffeba0] Running
	I0731 21:34:16.917295 1147232 system_pods.go:126] duration metric: took 2.134102549s to wait for k8s-apps to be running ...
	I0731 21:34:16.917310 1147232 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:16.917365 1147232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:16.932647 1147232 system_svc.go:56] duration metric: took 15.322111ms WaitForService to wait for kubelet
	I0731 21:34:16.932702 1147232 kubeadm.go:582] duration metric: took 2.791596331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:16.932730 1147232 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:16.935567 1147232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:16.935589 1147232 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:16.935600 1147232 node_conditions.go:105] duration metric: took 2.864432ms to run NodePressure ...
	I0731 21:34:16.935614 1147232 start.go:241] waiting for startup goroutines ...
	I0731 21:34:16.935621 1147232 start.go:246] waiting for cluster config update ...
	I0731 21:34:16.935631 1147232 start.go:255] writing updated cluster config ...
	I0731 21:34:16.935948 1147232 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:16.990670 1147232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:34:16.992682 1147232 out.go:177] * Done! kubectl is now configured to use "embed-certs-563652" cluster and "default" namespace by default
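Editor's note: the embed-certs run above retries twice (538ms, then 511ms) until kube-dns and kube-proxy stop being reported as missing. An equivalent manual wait, as a sketch only (the k8s-app label selectors are the upstream component labels also referenced later in this log, not something this run prints):

    kubectl --context embed-certs-563652 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
    kubectl --context embed-certs-563652 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=2m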
	I0731 21:34:15.690878 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:15.691156 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:18.139818 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:20.639113 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:23.140314 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.641086 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:25.691455 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:25.691639 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:34:28.139044 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:30.140499 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:32.640931 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:35.139207 1146656 pod_ready.go:102] pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace has status "Ready":"False"
	I0731 21:34:36.640291 1146656 pod_ready.go:81] duration metric: took 4m0.007535985s for pod "metrics-server-78fcd8795b-c7lxw" in "kube-system" namespace to be "Ready" ...
	E0731 21:34:36.640323 1146656 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:34:36.640334 1146656 pod_ready.go:38] duration metric: took 4m7.419160814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
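Editor's note: the four-minute wait above gives up on metrics-server-78fcd8795b-c7lxw while it is still Pending. One hedged way to dig into why such a pod never becomes Ready (the pod name is taken from the log; the describe/events output is not part of this report):

    kubectl -n kube-system describe pod metrics-server-78fcd8795b-c7lxw
    kubectl -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server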
	I0731 21:34:36.640354 1146656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:34:36.640393 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:36.640454 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:36.688629 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:36.688658 1146656 cri.go:89] found id: ""
	I0731 21:34:36.688668 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:36.688747 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.693261 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:36.693349 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:36.730997 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:36.731021 1146656 cri.go:89] found id: ""
	I0731 21:34:36.731028 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:36.731079 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.737624 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:36.737692 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:36.780734 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:36.780758 1146656 cri.go:89] found id: ""
	I0731 21:34:36.780769 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:36.780831 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.784767 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:36.784839 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:36.824129 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:36.824164 1146656 cri.go:89] found id: ""
	I0731 21:34:36.824174 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:36.824246 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.828299 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:36.828380 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:36.863976 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:36.864008 1146656 cri.go:89] found id: ""
	I0731 21:34:36.864017 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:36.864081 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.868516 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:36.868594 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:36.903106 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:36.903137 1146656 cri.go:89] found id: ""
	I0731 21:34:36.903148 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:36.903212 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.907260 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:36.907327 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:36.943921 1146656 cri.go:89] found id: ""
	I0731 21:34:36.943955 1146656 logs.go:276] 0 containers: []
	W0731 21:34:36.943963 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:36.943969 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:36.944025 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:36.979295 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:36.979327 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:36.979334 1146656 cri.go:89] found id: ""
	I0731 21:34:36.979345 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:36.979403 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.984464 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:36.988471 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:36.988511 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:37.121952 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:37.121995 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:37.169494 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:37.169546 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:37.205544 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:37.205577 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:37.255892 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:37.255930 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:37.292002 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:37.292036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:37.327852 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:37.327881 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:37.367753 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:37.367795 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:37.419399 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:37.419443 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:37.432894 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:37.432938 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:37.474408 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:37.474454 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:37.508203 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:37.508246 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:37.550030 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:37.550072 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
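Editor's note: each "Gathering logs for ..." step above shells into the node and runs the command shown on the following line. The same cycle can be reproduced by hand on the node; this is a sketch, with kube-apiserver as just one example container name and the tail length matching the 400 lines used above:

    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$ID"
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400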
	I0731 21:34:40.551728 1146656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:34:40.566959 1146656 api_server.go:72] duration metric: took 4m19.080511832s to wait for apiserver process to appear ...
	I0731 21:34:40.567027 1146656 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:34:40.567085 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:40.567153 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:40.617492 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:40.617529 1146656 cri.go:89] found id: ""
	I0731 21:34:40.617539 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:40.617605 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.621950 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:40.622023 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:40.664964 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:40.664990 1146656 cri.go:89] found id: ""
	I0731 21:34:40.664998 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:40.665052 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.669257 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:40.669353 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:40.705806 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:40.705842 1146656 cri.go:89] found id: ""
	I0731 21:34:40.705854 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:40.705920 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.710069 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:40.710146 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:40.746331 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:40.746358 1146656 cri.go:89] found id: ""
	I0731 21:34:40.746368 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:40.746420 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.754270 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:40.754364 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:40.791320 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:40.791356 1146656 cri.go:89] found id: ""
	I0731 21:34:40.791367 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:40.791435 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.795691 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:40.795773 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:40.835548 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:40.835578 1146656 cri.go:89] found id: ""
	I0731 21:34:40.835589 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:40.835652 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.839854 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:40.839939 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:40.874322 1146656 cri.go:89] found id: ""
	I0731 21:34:40.874358 1146656 logs.go:276] 0 containers: []
	W0731 21:34:40.874369 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:40.874379 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:40.874448 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:40.922665 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:40.922691 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.922695 1146656 cri.go:89] found id: ""
	I0731 21:34:40.922703 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:40.922762 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.926750 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:40.930612 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:40.930640 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:40.966656 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:40.966695 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:41.401560 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:41.401622 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:41.503991 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:41.504036 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:41.552765 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:41.552816 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:41.588315 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:41.588353 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:41.639790 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:41.639832 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:41.679851 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:41.679891 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:41.716182 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:41.716219 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:41.762445 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:41.762493 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:41.815762 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:41.815810 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:41.829753 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:41.829794 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:41.874703 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:41.874745 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.415559 1146656 api_server.go:253] Checking apiserver healthz at https://192.168.61.246:8443/healthz ...
	I0731 21:34:44.420498 1146656 api_server.go:279] https://192.168.61.246:8443/healthz returned 200:
	ok
	I0731 21:34:44.421648 1146656 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:34:44.421678 1146656 api_server.go:131] duration metric: took 3.854640091s to wait for apiserver health ...
	I0731 21:34:44.421690 1146656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:34:44.421724 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:34:44.421786 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:34:44.456716 1146656 cri.go:89] found id: "a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.456744 1146656 cri.go:89] found id: ""
	I0731 21:34:44.456755 1146656 logs.go:276] 1 containers: [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396]
	I0731 21:34:44.456809 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.460762 1146656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:34:44.460836 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:34:44.498325 1146656 cri.go:89] found id: "d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:44.498352 1146656 cri.go:89] found id: ""
	I0731 21:34:44.498361 1146656 logs.go:276] 1 containers: [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6]
	I0731 21:34:44.498416 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.502344 1146656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:34:44.502424 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:34:44.538766 1146656 cri.go:89] found id: "efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.538799 1146656 cri.go:89] found id: ""
	I0731 21:34:44.538809 1146656 logs.go:276] 1 containers: [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88]
	I0731 21:34:44.538874 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.542853 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:34:44.542946 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:34:44.578142 1146656 cri.go:89] found id: "e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:44.578175 1146656 cri.go:89] found id: ""
	I0731 21:34:44.578185 1146656 logs.go:276] 1 containers: [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618]
	I0731 21:34:44.578241 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.582494 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:34:44.582574 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:34:44.631110 1146656 cri.go:89] found id: "1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:44.631141 1146656 cri.go:89] found id: ""
	I0731 21:34:44.631149 1146656 logs.go:276] 1 containers: [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca]
	I0731 21:34:44.631208 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.635618 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:34:44.635693 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:34:44.669607 1146656 cri.go:89] found id: "8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:44.669633 1146656 cri.go:89] found id: ""
	I0731 21:34:44.669643 1146656 logs.go:276] 1 containers: [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3]
	I0731 21:34:44.669702 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.673967 1146656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:34:44.674052 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:34:44.723388 1146656 cri.go:89] found id: ""
	I0731 21:34:44.723417 1146656 logs.go:276] 0 containers: []
	W0731 21:34:44.723426 1146656 logs.go:278] No container was found matching "kindnet"
	I0731 21:34:44.723433 1146656 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:34:44.723485 1146656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:34:44.759398 1146656 cri.go:89] found id: "a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:44.759423 1146656 cri.go:89] found id: "c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:44.759429 1146656 cri.go:89] found id: ""
	I0731 21:34:44.759438 1146656 logs.go:276] 2 containers: [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f]
	I0731 21:34:44.759506 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.765787 1146656 ssh_runner.go:195] Run: which crictl
	I0731 21:34:44.769602 1146656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:34:44.769627 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:34:44.783608 1146656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:34:44.783646 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:34:44.897376 1146656 logs.go:123] Gathering logs for kube-apiserver [a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396] ...
	I0731 21:34:44.897415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a11eb6669e85ee9b7299af2794c57ca700617e90aafd72bdf83840b7a266f396"
	I0731 21:34:44.941518 1146656 logs.go:123] Gathering logs for coredns [efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88] ...
	I0731 21:34:44.941558 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efba76f74230d1ffe0e9c0eea087b69bf61c40c97faad9328006b09832ab8d88"
	I0731 21:34:44.976285 1146656 logs.go:123] Gathering logs for kube-proxy [1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca] ...
	I0731 21:34:44.976319 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aa83cc70feca9ecffbeab1b9171268b5babd5f10a25cc5afa854d4498e994ca"
	I0731 21:34:45.015310 1146656 logs.go:123] Gathering logs for kube-controller-manager [8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3] ...
	I0731 21:34:45.015343 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d94e11c56302f3e08240575918f89ad48027bdad3b491273a5550e854380cc3"
	I0731 21:34:45.076253 1146656 logs.go:123] Gathering logs for storage-provisioner [a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca] ...
	I0731 21:34:45.076298 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4d6f8d417836ce57d6a07edf7c9484e07b884ea1231d96acd5e1349b3b124ca"
	I0731 21:34:45.114621 1146656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:34:45.114656 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:34:45.171369 1146656 logs.go:123] Gathering logs for etcd [d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6] ...
	I0731 21:34:45.171415 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d614beb36e5ab3d6e5a927400753177dbeb0ceb262ebe34b1be0393b091504d6"
	I0731 21:34:45.219450 1146656 logs.go:123] Gathering logs for kube-scheduler [e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618] ...
	I0731 21:34:45.219492 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71c179bd22e964da5657303629ee8bd946f9a203ea35ea2b7eec7249d5c2618"
	I0731 21:34:45.254864 1146656 logs.go:123] Gathering logs for storage-provisioner [c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f] ...
	I0731 21:34:45.254901 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c579a97b62d1df94ce363fbd72d494a9fe160d1e2d9a0870135e726e904b1f9f"
	I0731 21:34:45.289962 1146656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:34:45.289999 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:34:45.660050 1146656 logs.go:123] Gathering logs for container status ...
	I0731 21:34:45.660113 1146656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:34:48.211383 1146656 system_pods.go:59] 8 kube-system pods found
	I0731 21:34:48.211418 1146656 system_pods.go:61] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.211423 1146656 system_pods.go:61] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.211427 1146656 system_pods.go:61] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.211431 1146656 system_pods.go:61] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.211435 1146656 system_pods.go:61] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.211440 1146656 system_pods.go:61] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.211449 1146656 system_pods.go:61] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.211456 1146656 system_pods.go:61] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.211467 1146656 system_pods.go:74] duration metric: took 3.789769058s to wait for pod list to return data ...
	I0731 21:34:48.211490 1146656 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:34:48.214462 1146656 default_sa.go:45] found service account: "default"
	I0731 21:34:48.214492 1146656 default_sa.go:55] duration metric: took 2.992385ms for default service account to be created ...
	I0731 21:34:48.214501 1146656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:34:48.220257 1146656 system_pods.go:86] 8 kube-system pods found
	I0731 21:34:48.220289 1146656 system_pods.go:89] "coredns-5cfdc65f69-9w4w4" [a8ee0da2-837d-46d8-9615-1021a5ad28b9] Running
	I0731 21:34:48.220295 1146656 system_pods.go:89] "etcd-no-preload-018891" [6773d9d6-82fd-4850-9920-3906d50f7417] Running
	I0731 21:34:48.220299 1146656 system_pods.go:89] "kube-apiserver-no-preload-018891" [9941a5d9-67dd-41d8-84a2-a4b50161fde7] Running
	I0731 21:34:48.220304 1146656 system_pods.go:89] "kube-controller-manager-no-preload-018891" [e70f8e2e-7810-409d-af6b-f30c44dd91da] Running
	I0731 21:34:48.220309 1146656 system_pods.go:89] "kube-proxy-x2dnn" [3a6403e5-f31e-4e5a-ba4f-32bc746c18ec] Running
	I0731 21:34:48.220313 1146656 system_pods.go:89] "kube-scheduler-no-preload-018891" [d9a394c1-9ef9-43e8-9b69-7abb9bbfbe65] Running
	I0731 21:34:48.220322 1146656 system_pods.go:89] "metrics-server-78fcd8795b-c7lxw" [6b18e5a9-5996-4650-97ea-204405ba9d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:34:48.220328 1146656 system_pods.go:89] "storage-provisioner" [35fc2f0d-7f78-4a87-83a1-94558267b235] Running
	I0731 21:34:48.220339 1146656 system_pods.go:126] duration metric: took 5.831037ms to wait for k8s-apps to be running ...
	I0731 21:34:48.220352 1146656 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:34:48.220404 1146656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:34:48.235707 1146656 system_svc.go:56] duration metric: took 15.341391ms WaitForService to wait for kubelet
	I0731 21:34:48.235747 1146656 kubeadm.go:582] duration metric: took 4m26.749308267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:34:48.235769 1146656 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:34:48.239352 1146656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:34:48.239377 1146656 node_conditions.go:123] node cpu capacity is 2
	I0731 21:34:48.239388 1146656 node_conditions.go:105] duration metric: took 3.614275ms to run NodePressure ...
	I0731 21:34:48.239400 1146656 start.go:241] waiting for startup goroutines ...
	I0731 21:34:48.239407 1146656 start.go:246] waiting for cluster config update ...
	I0731 21:34:48.239418 1146656 start.go:255] writing updated cluster config ...
	I0731 21:34:48.239724 1146656 ssh_runner.go:195] Run: rm -f paused
	I0731 21:34:48.291567 1146656 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:34:48.293377 1146656 out.go:177] * Done! kubectl is now configured to use "no-preload-018891" cluster and "default" namespace by default
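Editor's note: before declaring this run done, the waiter probes the apiserver healthz endpoint and gets 200/ok back (see the 21:34:44 lines above). The same probe by hand, as a sketch (the address is the one printed in this log; -k skips certificate verification for brevity):

    curl -k https://192.168.61.246:8443/healthz    # expect: ok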
	I0731 21:34:45.692895 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:34:45.693194 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695071 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:35:25.695336 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:35:25.695369 1147424 kubeadm.go:310] 
	I0731 21:35:25.695432 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:35:25.695496 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:35:25.695506 1147424 kubeadm.go:310] 
	I0731 21:35:25.695560 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:35:25.695606 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:35:25.695752 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:35:25.695775 1147424 kubeadm.go:310] 
	I0731 21:35:25.695866 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:35:25.695914 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:35:25.695965 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:35:25.695972 1147424 kubeadm.go:310] 
	I0731 21:35:25.696064 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:35:25.696197 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:35:25.696218 1147424 kubeadm.go:310] 
	I0731 21:35:25.696389 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:35:25.696510 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:35:25.696637 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:35:25.696739 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:35:25.696761 1147424 kubeadm.go:310] 
	I0731 21:35:25.697342 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:35:25.697447 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:35:25.697582 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:35:25.697782 1147424 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
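Editor's note: the kubeadm failure above ends with its own troubleshooting hints. Run on the node, those hints look like this (a sketch; the commands are copied from the kubeadm output, and CONTAINERID is a placeholder for whichever failing container the ps listing turns up):

    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID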
	
	I0731 21:35:25.697852 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:35:31.094319 1147424 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.396429611s)
	I0731 21:35:31.094410 1147424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:35:31.109019 1147424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:35:31.118415 1147424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:35:31.118447 1147424 kubeadm.go:157] found existing configuration files:
	
	I0731 21:35:31.118512 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:35:31.129005 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:35:31.129097 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:35:31.139701 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:35:31.149483 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:35:31.149565 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:35:31.158699 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.168151 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:35:31.168225 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:35:31.177911 1147424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:35:31.186739 1147424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:35:31.186821 1147424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
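Editor's note: the stale-config check above greps each kubeconfig for the expected control-plane endpoint and deletes the file when the grep fails. Collapsed into one loop for readability (a sketch; the paths and endpoint are exactly those from the log, the loop itself is the only addition):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done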
	I0731 21:35:31.196779 1147424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:35:31.410613 1147424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:37:27.101986 1147424 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:37:27.102135 1147424 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:37:27.103680 1147424 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:37:27.103742 1147424 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:37:27.103874 1147424 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:37:27.103971 1147424 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:37:27.104056 1147424 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:37:27.104135 1147424 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:37:27.105757 1147424 out.go:204]   - Generating certificates and keys ...
	I0731 21:37:27.105851 1147424 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:37:27.105911 1147424 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:37:27.105982 1147424 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:37:27.106047 1147424 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:37:27.106126 1147424 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:37:27.106185 1147424 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:37:27.106256 1147424 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:37:27.106340 1147424 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:37:27.106446 1147424 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:37:27.106527 1147424 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:37:27.106582 1147424 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:37:27.106669 1147424 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:37:27.106747 1147424 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:37:27.106800 1147424 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:37:27.106853 1147424 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:37:27.106928 1147424 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:37:27.107053 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:37:27.107169 1147424 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:37:27.107233 1147424 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:37:27.107307 1147424 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:37:27.108810 1147424 out.go:204]   - Booting up control plane ...
	I0731 21:37:27.108897 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:37:27.108964 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:37:27.109022 1147424 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:37:27.109090 1147424 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:37:27.109227 1147424 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:37:27.109276 1147424 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:37:27.109346 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109569 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109655 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.109876 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.109947 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110108 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110172 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110334 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110393 1147424 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:37:27.110549 1147424 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:37:27.110556 1147424 kubeadm.go:310] 
	I0731 21:37:27.110589 1147424 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:37:27.110626 1147424 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:37:27.110632 1147424 kubeadm.go:310] 
	I0731 21:37:27.110661 1147424 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:37:27.110707 1147424 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:37:27.110804 1147424 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:37:27.110816 1147424 kubeadm.go:310] 
	I0731 21:37:27.110920 1147424 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:37:27.110965 1147424 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:37:27.110999 1147424 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:37:27.111006 1147424 kubeadm.go:310] 
	I0731 21:37:27.111099 1147424 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:37:27.111173 1147424 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:37:27.111181 1147424 kubeadm.go:310] 
	I0731 21:37:27.111284 1147424 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:37:27.111357 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:37:27.111421 1147424 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:37:27.111501 1147424 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:37:27.111545 1147424 kubeadm.go:310] 
	I0731 21:37:27.111591 1147424 kubeadm.go:394] duration metric: took 8m1.593977042s to StartCluster
	I0731 21:37:27.111642 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:37:27.111732 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:37:27.151036 1147424 cri.go:89] found id: ""
	I0731 21:37:27.151080 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.151092 1147424 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:37:27.151101 1147424 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:37:27.151164 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:37:27.189839 1147424 cri.go:89] found id: ""
	I0731 21:37:27.189877 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.189897 1147424 logs.go:278] No container was found matching "etcd"
	I0731 21:37:27.189906 1147424 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:37:27.189975 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:37:27.224515 1147424 cri.go:89] found id: ""
	I0731 21:37:27.224553 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.224566 1147424 logs.go:278] No container was found matching "coredns"
	I0731 21:37:27.224574 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:37:27.224637 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:37:27.256890 1147424 cri.go:89] found id: ""
	I0731 21:37:27.256927 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.256939 1147424 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:37:27.256948 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:37:27.257017 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:37:27.292320 1147424 cri.go:89] found id: ""
	I0731 21:37:27.292360 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.292373 1147424 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:37:27.292380 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:37:27.292448 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:37:27.327537 1147424 cri.go:89] found id: ""
	I0731 21:37:27.327580 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.327591 1147424 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:37:27.327600 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:37:27.327669 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:37:27.362489 1147424 cri.go:89] found id: ""
	I0731 21:37:27.362522 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.362533 1147424 logs.go:278] No container was found matching "kindnet"
	I0731 21:37:27.362541 1147424 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:37:27.362612 1147424 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:37:27.398531 1147424 cri.go:89] found id: ""
	I0731 21:37:27.398575 1147424 logs.go:276] 0 containers: []
	W0731 21:37:27.398587 1147424 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:37:27.398605 1147424 logs.go:123] Gathering logs for dmesg ...
	I0731 21:37:27.398625 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:37:27.412082 1147424 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:37:27.412129 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:37:27.485574 1147424 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:37:27.485598 1147424 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:37:27.485615 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:37:27.602979 1147424 logs.go:123] Gathering logs for container status ...
	I0731 21:37:27.603026 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:37:27.642075 1147424 logs.go:123] Gathering logs for kubelet ...
	I0731 21:37:27.642108 1147424 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:37:27.692811 1147424 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:37:27.692868 1147424 out.go:239] * 
	W0731 21:37:27.692944 1147424 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.692968 1147424 out.go:239] * 
	W0731 21:37:27.693763 1147424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:37:27.697049 1147424 out.go:177] 
	W0731 21:37:27.698454 1147424 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:37:27.698525 1147424 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:37:27.698564 1147424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:37:27.700008 1147424 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.254995157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462498254972749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68425af0-52da-4be2-89e5-ed4b4d8c3cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.255466542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ead03d37-461a-4e0b-ab68-9a6f7ce01630 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.255535762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ead03d37-461a-4e0b-ab68-9a6f7ce01630 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.255583320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ead03d37-461a-4e0b-ab68-9a6f7ce01630 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.285885996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9511e20b-b30c-47d8-8762-a38b2b159b2e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.285974112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9511e20b-b30c-47d8-8762-a38b2b159b2e name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.287084311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3574a34-d469-4e0b-95e9-cb1d5138d990 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.287448587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462498287425607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3574a34-d469-4e0b-95e9-cb1d5138d990 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.287988815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccdea9e3-59a2-423c-9019-8cdc6f4bb556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.288047807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccdea9e3-59a2-423c-9019-8cdc6f4bb556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.288081565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ccdea9e3-59a2-423c-9019-8cdc6f4bb556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.319487598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c06161d-334e-404d-bb65-a43fe40a723c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.319571306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c06161d-334e-404d-bb65-a43fe40a723c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.320550269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c490579-bc3b-4fc7-abc3-a6908fb875ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.321041784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462498320978059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c490579-bc3b-4fc7-abc3-a6908fb875ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.321546295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38b2676d-6821-492e-88f4-e934ea2659ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.321596011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38b2676d-6821-492e-88f4-e934ea2659ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.321627040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=38b2676d-6821-492e-88f4-e934ea2659ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.351801413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bfbdda8-40ff-4207-a7db-4e4e6044b02c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.351895590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bfbdda8-40ff-4207-a7db-4e4e6044b02c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.352818454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d011e0a-80e8-45c2-adf1-46cba883bd4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.353280144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722462498353258320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d011e0a-80e8-45c2-adf1-46cba883bd4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.353671980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=533454a8-aafb-4aae-b873-3387e1be97ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.353757083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=533454a8-aafb-4aae-b873-3387e1be97ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:48:18 old-k8s-version-275462 crio[640]: time="2024-07-31 21:48:18.353803927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=533454a8-aafb-4aae-b873-3387e1be97ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 21:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037912] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.873982] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.920716] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.346172] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.912930] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.065585] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061848] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.166323] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.160547] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.289426] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.100697] systemd-fstab-generator[825]: Ignoring "noauto" option for root device
	[  +0.056106] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.885879] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[ +12.535811] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:33] systemd-fstab-generator[4947]: Ignoring "noauto" option for root device
	[Jul31 21:35] systemd-fstab-generator[5234]: Ignoring "noauto" option for root device
	[  +0.067044] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:48:18 up 19 min,  0 users,  load average: 0.00, 0.02, 0.02
	Linux old-k8s-version-275462 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000cbe870)
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: goroutine 160 [select]:
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b45ef0, 0x4f0ac20, 0xc000cda3c0, 0x1, 0xc0001020c0)
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254d20, 0xc0001020c0)
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cd0400, 0xc000cc0c80)
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 21:48:17 old-k8s-version-275462 kubelet[6676]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 21:48:17 old-k8s-version-275462 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:48:17 old-k8s-version-275462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:48:17 old-k8s-version-275462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Jul 31 21:48:17 old-k8s-version-275462 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:48:17 old-k8s-version-275462 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:48:18 old-k8s-version-275462 kubelet[6703]: I0731 21:48:18.038047    6703 server.go:416] Version: v1.20.0
	Jul 31 21:48:18 old-k8s-version-275462 kubelet[6703]: I0731 21:48:18.038382    6703 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:48:18 old-k8s-version-275462 kubelet[6703]: I0731 21:48:18.040598    6703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:48:18 old-k8s-version-275462 kubelet[6703]: I0731 21:48:18.041929    6703 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 31 21:48:18 old-k8s-version-275462 kubelet[6703]: W0731 21:48:18.042068    6703 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 2 (235.636357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275462" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.94s)
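
The failing run above ends with minikube's suggestion to check 'journalctl -xeu kubelet' and to pass --extra-config=kubelet.cgroup-driver=systemd, and the captured kubelet journal shows the service crash-looping (restart counter at 133) with a cgroup-detection warning, which points at the v1.20.0 kubelet never coming up on this node. A minimal troubleshooting sketch using only commands quoted in the log above; the profile name comes from this run, while the "tail -n 50" filter, the exact minikube ssh quoting, and the assumption that the suggested flag actually resolves the failure are hypothetical:

	# inspect the crash-looping kubelet and the (empty) container list, as suggested in the kubeadm output
	minikube ssh -p old-k8s-version-275462 "sudo journalctl -xeu kubelet | tail -n 50"
	minikube ssh -p old-k8s-version-275462 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

	# retry the start with the cgroup-driver hint from the failure message (assumption: may not be sufficient on its own)
	minikube start -p old-k8s-version-275462 --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 --extra-config=kubelet.cgroup-driver=systemd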

                                                
                                    

Test pass (258/323)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.85
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 4.46
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.46
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 67.83
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 135.14
40 TestAddons/serial/GCPAuth/Namespaces 3.07
42 TestAddons/parallel/Registry 18.73
44 TestAddons/parallel/InspektorGadget 10.82
46 TestAddons/parallel/HelmTiller 10.85
48 TestAddons/parallel/CSI 44.04
49 TestAddons/parallel/Headlamp 18.55
50 TestAddons/parallel/CloudSpanner 6.59
51 TestAddons/parallel/LocalPath 12.38
52 TestAddons/parallel/NvidiaDevicePlugin 5.52
53 TestAddons/parallel/Yakd 10.78
55 TestCertOptions 63.54
56 TestCertExpiration 285.2
58 TestForceSystemdFlag 82.12
59 TestForceSystemdEnv 69.51
61 TestKVMDriverInstallOrUpdate 3.99
65 TestErrorSpam/setup 39.45
66 TestErrorSpam/start 0.35
67 TestErrorSpam/status 0.71
68 TestErrorSpam/pause 1.46
69 TestErrorSpam/unpause 1.49
70 TestErrorSpam/stop 4.16
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 63.62
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 36.17
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
82 TestFunctional/serial/CacheCmd/cache/add_local 1.92
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 36.86
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.36
93 TestFunctional/serial/LogsFileCmd 1.37
94 TestFunctional/serial/InvalidService 4.11
96 TestFunctional/parallel/ConfigCmd 0.34
97 TestFunctional/parallel/DashboardCmd 23.23
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 1.02
104 TestFunctional/parallel/ServiceCmdConnect 6.69
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 45.4
108 TestFunctional/parallel/SSHCmd 0.42
109 TestFunctional/parallel/CpCmd 1.37
110 TestFunctional/parallel/MySQL 22.16
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.4
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
120 TestFunctional/parallel/License 0.27
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.44
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.07
128 TestFunctional/parallel/ImageCommands/Setup 1.53
129 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.21
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.75
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.29
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.86
142 TestFunctional/parallel/ServiceCmd/List 0.46
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
145 TestFunctional/parallel/ServiceCmd/Format 0.27
146 TestFunctional/parallel/ServiceCmd/URL 0.28
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
157 TestFunctional/parallel/MountCmd/any-port 18.91
158 TestFunctional/parallel/ProfileCmd/profile_list 0.3
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
160 TestFunctional/parallel/MountCmd/specific-port 1.97
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/StartCluster 207.19
169 TestMultiControlPlane/serial/DeployApp 5.36
170 TestMultiControlPlane/serial/PingHostFromPods 1.18
171 TestMultiControlPlane/serial/AddWorkerNode 54.3
172 TestMultiControlPlane/serial/NodeLabels 0.07
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
174 TestMultiControlPlane/serial/CopyFile 12.39
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
180 TestMultiControlPlane/serial/DeleteSecondaryNode 16.86
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
183 TestMultiControlPlane/serial/RestartCluster 340.21
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
185 TestMultiControlPlane/serial/AddSecondaryNode 72.68
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
190 TestJSONOutput/start/Command 55.52
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.65
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.65
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 6.62
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.19
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 85.41
222 TestMountStart/serial/StartWithMountFirst 24.03
223 TestMountStart/serial/VerifyMountFirst 0.4
224 TestMountStart/serial/StartWithMountSecond 28.23
225 TestMountStart/serial/VerifyMountSecond 0.4
226 TestMountStart/serial/DeleteFirst 0.93
227 TestMountStart/serial/VerifyMountPostDelete 0.4
228 TestMountStart/serial/Stop 1.29
229 TestMountStart/serial/RestartStopped 23.07
230 TestMountStart/serial/VerifyMountPostStop 0.41
233 TestMultiNode/serial/FreshStart2Nodes 120.64
234 TestMultiNode/serial/DeployApp2Nodes 4.53
235 TestMultiNode/serial/PingHostFrom2Pods 0.82
236 TestMultiNode/serial/AddNode 48.5
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.23
239 TestMultiNode/serial/CopyFile 7.36
240 TestMultiNode/serial/StopNode 2.26
241 TestMultiNode/serial/StartAfterStop 38.89
243 TestMultiNode/serial/DeleteNode 2.33
245 TestMultiNode/serial/RestartMultiNode 180.19
246 TestMultiNode/serial/ValidateNameConflict 45.6
253 TestScheduledStopUnix 113.65
257 TestRunningBinaryUpgrade 218.38
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 92.05
272 TestPause/serial/Start 98.39
280 TestNetworkPlugins/group/false 3.57
284 TestNoKubernetes/serial/StartWithStopK8s 66.24
285 TestNoKubernetes/serial/Start 24.41
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
288 TestNoKubernetes/serial/ProfileList 31.27
289 TestNoKubernetes/serial/Stop 1.4
290 TestNoKubernetes/serial/StartNoArgs 23.74
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
292 TestStoppedBinaryUpgrade/Setup 0.49
293 TestStoppedBinaryUpgrade/Upgrade 130.94
296 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
298 TestStartStop/group/no-preload/serial/FirstStart 65.82
299 TestStartStop/group/no-preload/serial/DeployApp 8.28
301 TestStartStop/group/embed-certs/serial/FirstStart 65.37
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
304 TestStartStop/group/embed-certs/serial/DeployApp 9.28
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.46
312 TestStartStop/group/no-preload/serial/SecondStart 661.83
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
317 TestStartStop/group/embed-certs/serial/SecondStart 552.85
318 TestStartStop/group/old-k8s-version/serial/Stop 6.29
319 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 410.52
332 TestStartStop/group/newest-cni/serial/FirstStart 46.9
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
335 TestStartStop/group/newest-cni/serial/Stop 10.36
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/newest-cni/serial/SecondStart 34.98
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/newest-cni/serial/Pause 2.58
342 TestNetworkPlugins/group/auto/Start 61.09
343 TestNetworkPlugins/group/calico/Start 115.88
344 TestNetworkPlugins/group/auto/KubeletFlags 0.22
345 TestNetworkPlugins/group/auto/NetCatPod 11.24
346 TestNetworkPlugins/group/auto/DNS 0.18
347 TestNetworkPlugins/group/custom-flannel/Start 79.93
348 TestNetworkPlugins/group/auto/Localhost 0.15
349 TestNetworkPlugins/group/auto/HairPin 0.15
350 TestNetworkPlugins/group/kindnet/Start 84.64
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.23
353 TestNetworkPlugins/group/calico/NetCatPod 12.24
354 TestNetworkPlugins/group/calico/DNS 0.17
355 TestNetworkPlugins/group/calico/Localhost 0.14
356 TestNetworkPlugins/group/calico/HairPin 0.14
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.72
359 TestNetworkPlugins/group/flannel/Start 78.32
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.57
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
362 TestNetworkPlugins/group/enable-default-cni/Start 92.59
363 TestNetworkPlugins/group/custom-flannel/DNS 0.15
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
368 TestNetworkPlugins/group/kindnet/NetCatPod 13.28
369 TestNetworkPlugins/group/bridge/Start 84.33
370 TestNetworkPlugins/group/kindnet/DNS 0.14
371 TestNetworkPlugins/group/kindnet/Localhost 0.14
372 TestNetworkPlugins/group/kindnet/HairPin 0.12
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
375 TestNetworkPlugins/group/flannel/NetCatPod 11.26
376 TestNetworkPlugins/group/flannel/DNS 0.16
377 TestNetworkPlugins/group/flannel/Localhost 0.12
378 TestNetworkPlugins/group/flannel/HairPin 0.15
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
385 TestNetworkPlugins/group/bridge/NetCatPod 10.23
386 TestNetworkPlugins/group/bridge/DNS 0.15
387 TestNetworkPlugins/group/bridge/Localhost 0.12
388 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (8.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-865106 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-865106 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.848224813s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.85s)
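For reference, the download-only flow exercised here can be reproduced by hand with the same flags the test passes. This is a minimal sketch assuming the locally built out/minikube-linux-amd64 binary used throughout this report:

    # Fetch the ISO and the preloaded-images tarball for v1.20.0 without creating a VM.
    # --force skips validations; the repeated --container-runtime flag mirrors the test invocation.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-865106 \
      --force --alsologtostderr \
      --kubernetes-version=v1.20.0 \
      --container-runtime=crio --driver=kvm2
    # Remove the profile afterwards, as the DeleteAll/DeleteAlwaysSucceeds subtests do.
    out/minikube-linux-amd64 delete -p download-only-865106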

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-865106
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-865106: exit status 85 (60.646981ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-865106 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |          |
	|         | -p download-only-865106        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:09:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:09:18.920316 1100989 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:09:18.920423 1100989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:18.920430 1100989 out.go:304] Setting ErrFile to fd 2...
	I0731 20:09:18.920435 1100989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:18.920616 1100989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	W0731 20:09:18.920821 1100989 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19360-1093692/.minikube/config/config.json: open /home/jenkins/minikube-integration/19360-1093692/.minikube/config/config.json: no such file or directory
	I0731 20:09:18.921391 1100989 out.go:298] Setting JSON to true
	I0731 20:09:18.922480 1100989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13910,"bootTime":1722442649,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:09:18.922553 1100989 start.go:139] virtualization: kvm guest
	I0731 20:09:18.924894 1100989 out.go:97] [download-only-865106] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0731 20:09:18.925007 1100989 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 20:09:18.925082 1100989 notify.go:220] Checking for updates...
	I0731 20:09:18.926463 1100989 out.go:169] MINIKUBE_LOCATION=19360
	I0731 20:09:18.927885 1100989 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:09:18.929502 1100989 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:09:18.931060 1100989 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:18.932508 1100989 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 20:09:18.935114 1100989 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 20:09:18.935344 1100989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:09:18.969121 1100989 out.go:97] Using the kvm2 driver based on user configuration
	I0731 20:09:18.969154 1100989 start.go:297] selected driver: kvm2
	I0731 20:09:18.969163 1100989 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:09:18.969524 1100989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:18.969612 1100989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:09:18.985649 1100989 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:09:18.985706 1100989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:09:18.986172 1100989 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 20:09:18.986340 1100989 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 20:09:18.986388 1100989 cni.go:84] Creating CNI manager for ""
	I0731 20:09:18.986395 1100989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:09:18.986401 1100989 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:09:18.986455 1100989 start.go:340] cluster config:
	{Name:download-only-865106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-865106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:09:18.986623 1100989 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:18.988574 1100989 out.go:97] Downloading VM boot image ...
	I0731 20:09:18.988614 1100989 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 20:09:21.930843 1100989 out.go:97] Starting "download-only-865106" primary control-plane node in "download-only-865106" cluster
	I0731 20:09:21.930890 1100989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:09:21.956689 1100989 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:09:21.956728 1100989 cache.go:56] Caching tarball of preloaded images
	I0731 20:09:21.956896 1100989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:09:21.958633 1100989 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 20:09:21.958646 1100989 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 20:09:21.989373 1100989 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-865106 host does not exist
	  To start a cluster, run: "minikube start -p download-only-865106"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
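The non-zero exit from "minikube logs" above is expected here: the profile was created with --download-only, so there is no control-plane host to collect logs from, and the subtest still reports PASS for this outcome. A minimal bash sketch of the same check:

    # On a download-only profile the host does not exist, so 'logs' exits non-zero.
    out/minikube-linux-amd64 logs -p download-only-865106
    echo "logs exited with status $?"   # in this run the reported status was 85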

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-865106
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (4.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-408291 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-408291 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.456383635s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-408291
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-408291: exit status 85 (61.299572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-865106 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | -p download-only-865106        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| delete  | -p download-only-865106        | download-only-865106 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| start   | -o=json --download-only        | download-only-408291 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | -p download-only-408291        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:09:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:09:28.094241 1101196 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:09:28.094520 1101196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:28.094529 1101196 out.go:304] Setting ErrFile to fd 2...
	I0731 20:09:28.094534 1101196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:28.094741 1101196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:09:28.095310 1101196 out.go:298] Setting JSON to true
	I0731 20:09:28.096374 1101196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13919,"bootTime":1722442649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:09:28.096437 1101196 start.go:139] virtualization: kvm guest
	I0731 20:09:28.098974 1101196 out.go:97] [download-only-408291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:09:28.099146 1101196 notify.go:220] Checking for updates...
	I0731 20:09:28.100704 1101196 out.go:169] MINIKUBE_LOCATION=19360
	I0731 20:09:28.102192 1101196 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:09:28.103587 1101196 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:09:28.105047 1101196 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:28.106681 1101196 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-408291 host does not exist
	  To start a cluster, run: "minikube start -p download-only-408291"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-408291
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (7.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-363533 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-363533 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.456185776s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-363533
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-363533: exit status 85 (63.20276ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-865106 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | -p download-only-865106             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| delete  | -p download-only-865106             | download-only-865106 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| start   | -o=json --download-only             | download-only-408291 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | -p download-only-408291             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| delete  | -p download-only-408291             | download-only-408291 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC | 31 Jul 24 20:09 UTC |
	| start   | -o=json --download-only             | download-only-363533 | jenkins | v1.33.1 | 31 Jul 24 20:09 UTC |                     |
	|         | -p download-only-363533             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:09:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:09:32.875771 1101381 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:09:32.875891 1101381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:32.875900 1101381 out.go:304] Setting ErrFile to fd 2...
	I0731 20:09:32.875905 1101381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:09:32.876109 1101381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:09:32.876683 1101381 out.go:298] Setting JSON to true
	I0731 20:09:32.877698 1101381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13924,"bootTime":1722442649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:09:32.877759 1101381 start.go:139] virtualization: kvm guest
	I0731 20:09:32.880158 1101381 out.go:97] [download-only-363533] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:09:32.880315 1101381 notify.go:220] Checking for updates...
	I0731 20:09:32.881806 1101381 out.go:169] MINIKUBE_LOCATION=19360
	I0731 20:09:32.883401 1101381 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:09:32.885136 1101381 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:09:32.886646 1101381 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:09:32.888177 1101381 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 20:09:32.890791 1101381 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 20:09:32.891004 1101381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:09:32.923825 1101381 out.go:97] Using the kvm2 driver based on user configuration
	I0731 20:09:32.923858 1101381 start.go:297] selected driver: kvm2
	I0731 20:09:32.923864 1101381 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:09:32.924280 1101381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:32.924382 1101381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19360-1093692/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:09:32.940347 1101381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:09:32.940403 1101381 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:09:32.940937 1101381 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 20:09:32.941084 1101381 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 20:09:32.941133 1101381 cni.go:84] Creating CNI manager for ""
	I0731 20:09:32.941145 1101381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:09:32.941151 1101381 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:09:32.941219 1101381 start.go:340] cluster config:
	{Name:download-only-363533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-363533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:09:32.941307 1101381 iso.go:125] acquiring lock: {Name:mk34d446687dcc517f35c24f3b1478074e0450ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:09:32.943030 1101381 out.go:97] Starting "download-only-363533" primary control-plane node in "download-only-363533" cluster
	I0731 20:09:32.943057 1101381 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:09:32.995697 1101381 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:09:32.995732 1101381 cache.go:56] Caching tarball of preloaded images
	I0731 20:09:32.995859 1101381 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:09:32.997821 1101381 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 20:09:32.997842 1101381 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 20:09:33.017975 1101381 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19360-1093692/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-363533 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363533"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-363533
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-782974 --alsologtostderr --binary-mirror http://127.0.0.1:40035 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-782974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-782974
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (67.83s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-044254 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-044254 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.978135623s)
helpers_test.go:175: Cleaning up "offline-crio-044254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-044254
--- PASS: TestOffline (67.83s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-877061
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-877061: exit status 85 (50.609671ms)

                                                
                                                
-- stdout --
	* Profile "addons-877061" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877061"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-877061
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-877061: exit status 85 (50.955261ms)

                                                
                                                
-- stdout --
	* Profile "addons-877061" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877061"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (135.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-877061 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-877061 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.138807621s)
--- PASS: TestAddons/Setup (135.14s)
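The parallel subtests that follow toggle individual addons against this profile. A minimal sketch of the same per-addon commands, using the profile name from this run (the "addons list" step is an extra inspection helper assumed here, not part of the test):

    # Enable a single addon on the running addons-877061 cluster.
    out/minikube-linux-amd64 addons enable headlamp -p addons-877061 --alsologtostderr -v=1
    # Inspect addon states (assumed inspection step; the test does not run this).
    out/minikube-linux-amd64 addons list -p addons-877061
    # Disable it again when finished, as the subtest does.
    out/minikube-linux-amd64 -p addons-877061 addons disable headlamp --alsologtostderr -v=1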

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (3.07s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-877061 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-877061 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-877061 get secret gcp-auth -n new-namespace: exit status 1 (77.912502ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-877061 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-877061 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (3.07s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.87745ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-pgf2q" [40e9667a-bd97-42a3-bb45-e40bc6e3b530] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004489359s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cdmns" [ec3040a1-3e1e-4ba3-9242-35e9ce417ec0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004608096s
addons_test.go:342: (dbg) Run:  kubectl --context addons-877061 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-877061 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-877061 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.991421583s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 ip
2024/07/31 20:12:57 [DEBUG] GET http://192.168.39.25:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.73s)
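The registry check above amounts to probing the in-cluster service DNS name from a throwaway pod and then querying the registry via the node IP; a sketch of the same two probes, taken from this run's commands:

    # Probe the registry service from inside the cluster, as the test does.
    kubectl --context addons-877061 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Fetch the node IP used for the direct GET against port 5000 logged above.
    out/minikube-linux-amd64 -p addons-877061 ip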

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-twrs9" [3f1f5e48-df3f-46de-a9cf-785401e973e3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004148385s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877061
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-877061: (5.817383946s)
--- PASS: TestAddons/parallel/InspektorGadget (10.82s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.434445ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-7dwjf" [b2e84403-dfb7-4445-83e9-f9864386e974] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003777615s
addons_test.go:475: (dbg) Run:  kubectl --context addons-877061 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-877061 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.307731241s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.85s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.181901ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c41ace5f-6257-436d-91ff-3462ecaa823f] Pending
helpers_test.go:344: "task-pv-pod" [c41ace5f-6257-436d-91ff-3462ecaa823f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c41ace5f-6257-436d-91ff-3462ecaa823f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003697519s
addons_test.go:590: (dbg) Run:  kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877061 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877061 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-877061 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-877061 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [61b742a4-4a64-465f-b69f-0f2c844a0ac1] Pending
helpers_test.go:344: "task-pv-pod-restore" [61b742a4-4a64-465f-b69f-0f2c844a0ac1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [61b742a4-4a64-465f-b69f-0f2c844a0ac1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003965482s
addons_test.go:632: (dbg) Run:  kubectl --context addons-877061 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-877061 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-877061 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.695263017s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.04s)
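Condensed, the CSI flow exercised above is: create a PVC, mount it in a pod, snapshot the volume, then restore the snapshot into a new PVC and pod. A sketch of the same kubectl sequence using the manifests referenced in the log (paths are relative to the minikube test tree):

    kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Drop the original pod and claim, then restore from the snapshot.
    kubectl --context addons-877061 delete pod task-pv-pod
    kubectl --context addons-877061 delete pvc hpvc
    kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-877061 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml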

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-877061 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tb7kb" [f4f3f110-fae5-444b-864c-acb1f3344c60] Pending
helpers_test.go:344: "headlamp-7867546754-tb7kb" [f4f3f110-fae5-444b-864c-acb1f3344c60] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tb7kb" [f4f3f110-fae5-444b-864c-acb1f3344c60] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004168078s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 addons disable headlamp --alsologtostderr -v=1: (5.609426055s)
--- PASS: TestAddons/parallel/Headlamp (18.55s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-58868" [16eff1c8-f865-4c67-b90e-2867dcb438a5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003835265s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-877061
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (12.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-877061 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-877061 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c3a5eeb-2ae5-448d-85e3-4e0e9f2ea812] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c3a5eeb-2ae5-448d-85e3-4e0e9f2ea812] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c3a5eeb-2ae5-448d-85e3-4e0e9f2ea812] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00392853s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-877061 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 ssh "cat /opt/local-path-provisioner/pvc-dc514d6f-0e3d-4ea7-a5f8-6c9da90dff2a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-877061 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-877061 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.38s)
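
The repeated helpers_test.go:394 lines above are a poll of the PVC's .status.phase; with the local-path provisioner the claim stays Pending until the consuming pod is scheduled, then flips to Bound. A rough Go equivalent of that polling loop, using the names from the log (a sketch, not the actual test helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reaches the wanted phase or the timeout elapses.
func waitForPVCPhase(context, namespace, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, pvc, want, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-877061", "default", "test-pvc", "Bound", 5*time.Minute))
}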

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5kbf8" [c837ef00-57b2-4111-8588-1b47358c0549] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004912304s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-877061
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

                                                
                                    
TestAddons/parallel/Yakd (10.78s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-bnzjr" [5352a604-6f0e-4003-8eb5-29b67185f096] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003978737s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-877061 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-877061 addons disable yakd --alsologtostderr -v=1: (5.770339596s)
--- PASS: TestAddons/parallel/Yakd (10.78s)

                                                
                                    
TestCertOptions (63.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-425308 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-425308 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m2.053520821s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-425308 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-425308 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-425308 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-425308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-425308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-425308: (1.040721174s)
--- PASS: TestCertOptions (63.54s)
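
TestCertOptions requests extra API-server SANs (two IPs and two host names, see the start flags above) and then inspects /var/lib/minikube/certs/apiserver.crt with openssl inside the VM. The same check can be done with Go's crypto/x509 once the certificate has been copied out of the VM; the local file name below is an assumption made for this sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// apiserver.crt is assumed to have been copied out of the VM first,
	// e.g. via `minikube -p cert-options-425308 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt"`.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// SANs requested on the command line above.
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(want)) {
				found = true
			}
		}
		fmt.Printf("IP SAN %s present: %v\n", want, found)
	}
	for _, want := range []string{"localhost", "www.google.com"} {
		found := false
		for _, name := range cert.DNSNames {
			if name == want {
				found = true
			}
		}
		fmt.Printf("DNS SAN %s present: %v\n", want, found)
	}
}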

                                                
                                    
TestCertExpiration (285.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-238338 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-238338 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m4.521640465s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-238338 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-238338 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.614306859s)
helpers_test.go:175: Cleaning up "cert-expiration-238338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-238338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-238338: (1.063109168s)
--- PASS: TestCertExpiration (285.20s)

                                                
                                    
TestForceSystemdFlag (82.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-406944 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-406944 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.52413754s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-406944 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-406944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-406944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-406944: (1.176882795s)
--- PASS: TestForceSystemdFlag (82.12s)

                                                
                                    
TestForceSystemdEnv (69.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-127493 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-127493 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.477211671s)
helpers_test.go:175: Cleaning up "force-systemd-env-127493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-127493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-127493: (1.02818464s)
--- PASS: TestForceSystemdEnv (69.51s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.99s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.99s)

                                                
                                    
TestErrorSpam/setup (39.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-961133 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-961133 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-961133 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-961133 --driver=kvm2  --container-runtime=crio: (39.451708065s)
--- PASS: TestErrorSpam/setup (39.45s)

                                                
                                    
TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

                                                
                                    
TestErrorSpam/stop (4.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop: (1.472186392s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop: (1.453725408s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-961133 --log_dir /tmp/nospam-961133 stop: (1.231766188s)
--- PASS: TestErrorSpam/stop (4.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19360-1093692/.minikube/files/etc/test/nested/copy/1100976/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (63.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0731 20:22:00.018491 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.024589 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.034834 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.055153 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.095487 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.175782 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.336225 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:00.656826 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:01.297301 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:02.577791 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:05.138056 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:10.258846 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:20.499759 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:22:40.980148 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-110390 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m3.618654972s)
--- PASS: TestFunctional/serial/StartWithProxy (63.62s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.17s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --alsologtostderr -v=8
E0731 20:23:21.940977 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-110390 --alsologtostderr -v=8: (36.16885092s)
functional_test.go:659: soft start took 36.169567432s for "functional-110390" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.17s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-110390 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:3.1: (1.03077093s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:3.3: (1.094625933s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 cache add registry.k8s.io/pause:latest: (1.102241729s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-110390 /tmp/TestFunctionalserialCacheCmdcacheadd_local975657434/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache add minikube-local-cache-test:functional-110390
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 cache add minikube-local-cache-test:functional-110390: (1.590155014s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache delete minikube-local-cache-test:functional-110390
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-110390
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.052038ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 kubectl -- --context functional-110390 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-110390 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-110390 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.854961476s)
functional_test.go:757: restart took 36.855082013s for "functional-110390" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.86s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-110390 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
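
ComponentHealth reads the control-plane pods as JSON and checks that each reports phase Running and a Ready condition of True, which is what the phase/status lines above summarize. A compact way to reproduce that check outside the test suite, assuming kubectl and the functional-110390 context are available:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the slice of the PodList JSON that the check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-110390", "-n", "kube-system",
		"get", "po", "-l", "tier=control-plane", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}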

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 logs: (1.361043464s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 logs --file /tmp/TestFunctionalserialLogsFileCmd2367063194/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 logs --file /tmp/TestFunctionalserialLogsFileCmd2367063194/001/logs.txt: (1.37028531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-110390 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-110390
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-110390: exit status 115 (275.029833ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.234:31211 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-110390 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 config get cpus: exit status 14 (50.972723ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 config get cpus: exit status 14 (57.142144ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
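
In the ConfigCmd run above, `config get cpus` exits with status 14 and prints "Error: specified key could not be found in config" whenever the key is unset. A caller can use that to tell "not set" apart from a real failure; the sketch below treats exit code 14 that way purely because that is what this report shows, not because it is documented behaviour:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// getConfig runs `minikube config get <key>` (binary path as used in this report)
// and distinguishes an unset key (exit status 14 in the log above) from other errors.
func getConfig(profile, key string) (value string, set bool, err error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "get", key).Output()
	if err == nil {
		return strings.TrimSpace(string(out)), true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil // key not present in config
	}
	return "", false, err
}

func main() {
	val, ok, err := getConfig("functional-110390", "cpus")
	fmt.Println(val, ok, err)
}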

                                                
                                    
TestFunctional/parallel/DashboardCmd (23.23s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-110390 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-110390 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1110893: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.23s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-110390 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.660147ms)

                                                
                                                
-- stdout --
	* [functional-110390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:24:47.927290 1110773 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:24:47.927548 1110773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:24:47.927559 1110773 out.go:304] Setting ErrFile to fd 2...
	I0731 20:24:47.927563 1110773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:24:47.927776 1110773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:24:47.928362 1110773 out.go:298] Setting JSON to false
	I0731 20:24:47.929484 1110773 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14839,"bootTime":1722442649,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:24:47.929555 1110773 start.go:139] virtualization: kvm guest
	I0731 20:24:47.931813 1110773 out.go:177] * [functional-110390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:24:47.933348 1110773 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:24:47.933369 1110773 notify.go:220] Checking for updates...
	I0731 20:24:47.935868 1110773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:24:47.937217 1110773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:24:47.938674 1110773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:24:47.940051 1110773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:24:47.941436 1110773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:24:47.943029 1110773 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:24:47.943486 1110773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:24:47.943572 1110773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:24:47.959307 1110773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I0731 20:24:47.959799 1110773 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:24:47.960401 1110773 main.go:141] libmachine: Using API Version  1
	I0731 20:24:47.960423 1110773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:24:47.960894 1110773 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:24:47.961117 1110773 main.go:141] libmachine: (functional-110390) Calling .DriverName
	I0731 20:24:47.961402 1110773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:24:47.961720 1110773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:24:47.961761 1110773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:24:47.982026 1110773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46511
	I0731 20:24:47.982517 1110773 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:24:47.983108 1110773 main.go:141] libmachine: Using API Version  1
	I0731 20:24:47.983145 1110773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:24:47.983522 1110773 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:24:47.983743 1110773 main.go:141] libmachine: (functional-110390) Calling .DriverName
	I0731 20:24:48.018910 1110773 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:24:48.020173 1110773 start.go:297] selected driver: kvm2
	I0731 20:24:48.020192 1110773 start.go:901] validating driver "kvm2" against &{Name:functional-110390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-110390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:24:48.020296 1110773 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:24:48.022422 1110773 out.go:177] 
	W0731 20:24:48.023790 1110773 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 20:24:48.024925 1110773 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-110390 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-110390 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.710801ms)

                                                
                                                
-- stdout --
	* [functional-110390] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:24:48.205031 1110829 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:24:48.205307 1110829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:24:48.205317 1110829 out.go:304] Setting ErrFile to fd 2...
	I0731 20:24:48.205322 1110829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:24:48.205598 1110829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:24:48.206127 1110829 out.go:298] Setting JSON to false
	I0731 20:24:48.207206 1110829 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14839,"bootTime":1722442649,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:24:48.207270 1110829 start.go:139] virtualization: kvm guest
	I0731 20:24:48.209283 1110829 out.go:177] * [functional-110390] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 20:24:48.210647 1110829 notify.go:220] Checking for updates...
	I0731 20:24:48.210661 1110829 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 20:24:48.212055 1110829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:24:48.213218 1110829 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 20:24:48.214295 1110829 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 20:24:48.215407 1110829 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:24:48.216614 1110829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:24:48.218128 1110829 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:24:48.218499 1110829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:24:48.218571 1110829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:24:48.234635 1110829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0731 20:24:48.235054 1110829 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:24:48.235612 1110829 main.go:141] libmachine: Using API Version  1
	I0731 20:24:48.235629 1110829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:24:48.236003 1110829 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:24:48.236286 1110829 main.go:141] libmachine: (functional-110390) Calling .DriverName
	I0731 20:24:48.236548 1110829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:24:48.236896 1110829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:24:48.236936 1110829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:24:48.252870 1110829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0731 20:24:48.253415 1110829 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:24:48.253996 1110829 main.go:141] libmachine: Using API Version  1
	I0731 20:24:48.254016 1110829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:24:48.254356 1110829 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:24:48.254549 1110829 main.go:141] libmachine: (functional-110390) Calling .DriverName
	I0731 20:24:48.289671 1110829 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 20:24:48.291099 1110829 start.go:297] selected driver: kvm2
	I0731 20:24:48.291115 1110829 start.go:901] validating driver "kvm2" against &{Name:functional-110390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-110390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:24:48.291225 1110829 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:24:48.293175 1110829 out.go:177] 
	W0731 20:24:48.294402 1110829 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 20:24:48.295550 1110829 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-110390 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-110390 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-zzmtc" [ece05dd0-422b-4e80-b965-bf89b4d349aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-zzmtc" [ece05dd0-422b-4e80-b965-bf89b4d349aa] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004190525s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.234:30265
functional_test.go:1671: http://192.168.39.234:30265: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-zzmtc

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.234:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.234:30265
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.69s)
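The flow above (create a deployment, expose it as a NodePort service, resolve the endpoint with "service --url", then probe it over HTTP) can be replayed by hand; a rough sketch using the same names as the test, with curl standing in for the HTTP check the test performs internally:

  kubectl --context functional-110390 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-110390 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-amd64 -p functional-110390 service hello-node-connect --url)
  curl -s "$URL"   # echoserver reflects the request, as in the body captured above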

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [673c5a47-3507-465f-9261-596e93a91fbf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004197929s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-110390 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-110390 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-110390 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5754b33-68ab-4f52-9800-6a13960e2039] Pending
helpers_test.go:344: "sp-pod" [e5754b33-68ab-4f52-9800-6a13960e2039] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5754b33-68ab-4f52-9800-6a13960e2039] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00377919s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-110390 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-110390 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-110390 delete -f testdata/storage-provisioner/pod.yaml: (1.015989013s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [74332eff-2af4-4ae4-85c6-cbd85345a381] Pending
helpers_test.go:344: "sp-pod" [74332eff-2af4-4ae4-85c6-cbd85345a381] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [74332eff-2af4-4ae4-85c6-cbd85345a381] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.005089699s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-110390 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.40s)
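The persistence check above follows a simple pattern: bind a claim, write a file from a pod that mounts it, delete the pod, then read the file back from a fresh pod mounting the same claim. A condensed sketch of that sequence, reusing the manifests and names from the log (testdata/storage-provisioner/ is a path inside the minikube source tree):

  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pod.yaml
  # wait for sp-pod to reach Running, as the test does, then:
  kubectl --context functional-110390 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-110390 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-110390 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-110390 exec sp-pod -- ls /tmp/mount   # "foo" should survive the pod restart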

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh -n functional-110390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cp functional-110390:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd594729867/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh -n functional-110390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh -n functional-110390 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
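minikube cp copies files both into and out of the guest, and each copy above is verified with an in-VM cat over ssh. A brief sketch of a round trip using the paths from the log (the local destination name here is illustrative):

  out/minikube-linux-amd64 -p functional-110390 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-110390 ssh -n functional-110390 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-amd64 -p functional-110390 cp functional-110390:/home/docker/cp-test.txt ./cp-test-copy.txt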

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-110390 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-pgktg" [0389ee59-8ca3-4a29-8c9f-a0885fc6ffd9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-pgktg" [0389ee59-8ca3-4a29-8c9f-a0885fc6ffd9] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.006183731s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- mysql -ppassword -e "show databases;": exit status 1 (209.747298ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- mysql -ppassword -e "show databases;": exit status 1 (179.59513ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.16s)
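The two non-zero exits above are expected noise: mysqld was still initializing inside the pod, so the client could not reach /var/run/mysqld/mysqld.sock yet, and the test simply re-runs the same exec until it succeeds. A sketch of doing the same wait by hand (the retry loop is an illustration, not the test's exact mechanism; the pod name comes from the log above):

  until kubectl --context functional-110390 exec mysql-64454c8b5c-pgktg -- \
      mysql -ppassword -e "show databases;"; do sleep 2; done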

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1100976/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/test/nested/copy/1100976/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1100976.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/ssl/certs/1100976.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1100976.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /usr/share/ca-certificates/1100976.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11009762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/ssl/certs/11009762.pem"
E0731 20:24:43.861913 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11009762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /usr/share/ca-certificates/11009762.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
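CertSync verifies that the host-side test certificates (1100976.pem and 11009762.pem) are synced into the guest under both /etc/ssl/certs and /usr/share/ca-certificates, along with the hash-named entries (51391683.0 and 3ec20f2e.0) that OpenSSL-style trust stores use. A sketch of checking one of those hashes by hand, assuming openssl is available on the host, that the grouping in the log pairs 1100976.pem with 51391683.0, and using a placeholder path for the local copy of the cert:

  openssl x509 -noout -hash -in /path/to/1100976.pem   # expected to print the hash used in the link name (51391683 here)
  out/minikube-linux-amd64 -p functional-110390 ssh "sudo cat /etc/ssl/certs/51391683.0"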

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-110390 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active docker": exit status 1 (239.639773ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active containerd": exit status 1 (234.087946ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
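With cri-o as the active runtime, the test expects the docker and containerd units to be inactive, so the non-zero exits above are the desired outcome (systemctl is-active exits 3 for an inactive unit, which ssh surfaces as "Process exited with status 3"). A sketch of the complementary check against the active runtime, assuming the cri-o unit is named crio as in standard installs:

  out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active crio"     # expect: active
  out/minikube-linux-amd64 -p functional-110390 ssh "sudo systemctl is-active docker"   # expect: inactive, exit status 3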

                                                
                                    
x
+
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110390 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-110390
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-110390
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110390 image ls --format short --alsologtostderr:
I0731 20:25:08.562287 1111492 out.go:291] Setting OutFile to fd 1 ...
I0731 20:25:08.562579 1111492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:08.562588 1111492 out.go:304] Setting ErrFile to fd 2...
I0731 20:25:08.562593 1111492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:08.562819 1111492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
I0731 20:25:08.563478 1111492 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:08.563610 1111492 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:08.563992 1111492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:08.564044 1111492 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:08.579627 1111492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
I0731 20:25:08.580084 1111492 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:08.580650 1111492 main.go:141] libmachine: Using API Version  1
I0731 20:25:08.580670 1111492 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:08.581065 1111492 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:08.581260 1111492 main.go:141] libmachine: (functional-110390) Calling .GetState
I0731 20:25:08.582880 1111492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:08.582922 1111492 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:08.597627 1111492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39631
I0731 20:25:08.598028 1111492 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:08.598438 1111492 main.go:141] libmachine: Using API Version  1
I0731 20:25:08.598460 1111492 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:08.598808 1111492 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:08.598969 1111492 main.go:141] libmachine: (functional-110390) Calling .DriverName
I0731 20:25:08.599155 1111492 ssh_runner.go:195] Run: systemctl --version
I0731 20:25:08.599184 1111492 main.go:141] libmachine: (functional-110390) Calling .GetSSHHostname
I0731 20:25:08.601688 1111492 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:08.602119 1111492 main.go:141] libmachine: (functional-110390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cc:6e", ip: ""} in network mk-functional-110390: {Iface:virbr1 ExpiryTime:2024-07-31 21:22:13 +0000 UTC Type:0 Mac:52:54:00:34:cc:6e Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-110390 Clientid:01:52:54:00:34:cc:6e}
I0731 20:25:08.602158 1111492 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined IP address 192.168.39.234 and MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:08.602324 1111492 main.go:141] libmachine: (functional-110390) Calling .GetSSHPort
I0731 20:25:08.602488 1111492 main.go:141] libmachine: (functional-110390) Calling .GetSSHKeyPath
I0731 20:25:08.602635 1111492 main.go:141] libmachine: (functional-110390) Calling .GetSSHUsername
I0731 20:25:08.602752 1111492 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/functional-110390/id_rsa Username:docker}
I0731 20:25:08.686500 1111492 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 20:25:08.720956 1111492 main.go:141] libmachine: Making call to close driver server
I0731 20:25:08.720970 1111492 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:08.721309 1111492 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:08.721345 1111492 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:08.721344 1111492 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
I0731 20:25:08.721357 1111492 main.go:141] libmachine: Making call to close driver server
I0731 20:25:08.721367 1111492 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:08.721594 1111492 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:08.721617 1111492 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
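image ls supports short, table, json and yaml output (the next three sections exercise the other formats), and on a cri-o cluster the listing is ultimately produced from crictl on the guest, as the ssh_runner line above shows. A short sketch of the two views:

  out/minikube-linux-amd64 -p functional-110390 image ls --format table
  out/minikube-linux-amd64 -p functional-110390 ssh "sudo crictl images --output json"   # what the command shells out to, per the log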

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110390 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/minikube-local-cache-test     | functional-110390  | 1893509e41694 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/kicbase/echo-server           | functional-110390  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110390 image ls --format table --alsologtostderr:
I0731 20:25:09.238411 1111626 out.go:291] Setting OutFile to fd 1 ...
I0731 20:25:09.238562 1111626 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.238574 1111626 out.go:304] Setting ErrFile to fd 2...
I0731 20:25:09.238581 1111626 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.238810 1111626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
I0731 20:25:09.239428 1111626 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.239529 1111626 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.239883 1111626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.239930 1111626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.255807 1111626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
I0731 20:25:09.256374 1111626 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.256973 1111626 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.256997 1111626 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.257396 1111626 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.257675 1111626 main.go:141] libmachine: (functional-110390) Calling .GetState
I0731 20:25:09.259672 1111626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.259727 1111626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.274800 1111626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
I0731 20:25:09.275239 1111626 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.275733 1111626 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.275759 1111626 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.276106 1111626 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.276345 1111626 main.go:141] libmachine: (functional-110390) Calling .DriverName
I0731 20:25:09.276605 1111626 ssh_runner.go:195] Run: systemctl --version
I0731 20:25:09.276638 1111626 main.go:141] libmachine: (functional-110390) Calling .GetSSHHostname
I0731 20:25:09.279398 1111626 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.279846 1111626 main.go:141] libmachine: (functional-110390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cc:6e", ip: ""} in network mk-functional-110390: {Iface:virbr1 ExpiryTime:2024-07-31 21:22:13 +0000 UTC Type:0 Mac:52:54:00:34:cc:6e Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-110390 Clientid:01:52:54:00:34:cc:6e}
I0731 20:25:09.279880 1111626 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined IP address 192.168.39.234 and MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.280004 1111626 main.go:141] libmachine: (functional-110390) Calling .GetSSHPort
I0731 20:25:09.280224 1111626 main.go:141] libmachine: (functional-110390) Calling .GetSSHKeyPath
I0731 20:25:09.280464 1111626 main.go:141] libmachine: (functional-110390) Calling .GetSSHUsername
I0731 20:25:09.280630 1111626 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/functional-110390/id_rsa Username:docker}
I0731 20:25:09.366071 1111626 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 20:25:09.401636 1111626 main.go:141] libmachine: Making call to close driver server
I0731 20:25:09.401658 1111626 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:09.401991 1111626 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:09.402033 1111626 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:09.402042 1111626 main.go:141] libmachine: Making call to close driver server
I0731 20:25:09.402048 1111626 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:09.401992 1111626 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
I0731 20:25:09.402351 1111626 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:09.402368 1111626 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110390 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e4
00542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube
-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c
828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664
f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"1893509e41694994eac3683ee5da85cdf5d079f157283bab6105f0c3a41c73a2","repoDigests":["localhost/minikube-local-cache-test@sha256:249633e8cc951f246c30048e354884c82ac648002aaa45686c0bb676228476a8"],"repoTags":["localhost/minikube-local-cache-test:functional-110390"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"
,"repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737
c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-110390"],"size":"4943877"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110390 image ls --format json --alsologtostderr:
I0731 20:25:09.018034 1111579 out.go:291] Setting OutFile to fd 1 ...
I0731 20:25:09.018465 1111579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.018483 1111579 out.go:304] Setting ErrFile to fd 2...
I0731 20:25:09.018491 1111579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.018979 1111579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
I0731 20:25:09.019881 1111579 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.019982 1111579 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.020404 1111579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.020445 1111579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.035772 1111579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
I0731 20:25:09.036243 1111579 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.036833 1111579 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.036872 1111579 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.037208 1111579 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.037404 1111579 main.go:141] libmachine: (functional-110390) Calling .GetState
I0731 20:25:09.039338 1111579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.039395 1111579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.055160 1111579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
I0731 20:25:09.055573 1111579 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.056159 1111579 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.056188 1111579 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.056598 1111579 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.056804 1111579 main.go:141] libmachine: (functional-110390) Calling .DriverName
I0731 20:25:09.057016 1111579 ssh_runner.go:195] Run: systemctl --version
I0731 20:25:09.057047 1111579 main.go:141] libmachine: (functional-110390) Calling .GetSSHHostname
I0731 20:25:09.060418 1111579 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.060890 1111579 main.go:141] libmachine: (functional-110390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cc:6e", ip: ""} in network mk-functional-110390: {Iface:virbr1 ExpiryTime:2024-07-31 21:22:13 +0000 UTC Type:0 Mac:52:54:00:34:cc:6e Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-110390 Clientid:01:52:54:00:34:cc:6e}
I0731 20:25:09.060922 1111579 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined IP address 192.168.39.234 and MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.061139 1111579 main.go:141] libmachine: (functional-110390) Calling .GetSSHPort
I0731 20:25:09.061324 1111579 main.go:141] libmachine: (functional-110390) Calling .GetSSHKeyPath
I0731 20:25:09.061502 1111579 main.go:141] libmachine: (functional-110390) Calling .GetSSHUsername
I0731 20:25:09.061653 1111579 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/functional-110390/id_rsa Username:docker}
I0731 20:25:09.146518 1111579 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 20:25:09.188505 1111579 main.go:141] libmachine: Making call to close driver server
I0731 20:25:09.188524 1111579 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:09.188860 1111579 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:09.188879 1111579 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:09.188892 1111579 main.go:141] libmachine: Making call to close driver server
I0731 20:25:09.188892 1111579 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
I0731 20:25:09.188903 1111579 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:09.189141 1111579 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:09.189161 1111579 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:09.189161 1111579 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110390 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-110390
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57
repoTags:
- docker.io/library/nginx:alpine
size: "45068794"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 1893509e41694994eac3683ee5da85cdf5d079f157283bab6105f0c3a41c73a2
repoDigests:
- localhost/minikube-local-cache-test@sha256:249633e8cc951f246c30048e354884c82ac648002aaa45686c0bb676228476a8
repoTags:
- localhost/minikube-local-cache-test:functional-110390
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110390 image ls --format yaml --alsologtostderr:
I0731 20:25:08.770776 1111526 out.go:291] Setting OutFile to fd 1 ...
I0731 20:25:08.770903 1111526 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:08.770912 1111526 out.go:304] Setting ErrFile to fd 2...
I0731 20:25:08.770917 1111526 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:08.771078 1111526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
I0731 20:25:08.771662 1111526 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:08.771780 1111526 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:08.772226 1111526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:08.772283 1111526 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:08.787657 1111526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
I0731 20:25:08.788180 1111526 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:08.788818 1111526 main.go:141] libmachine: Using API Version  1
I0731 20:25:08.788847 1111526 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:08.789628 1111526 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:08.790013 1111526 main.go:141] libmachine: (functional-110390) Calling .GetState
I0731 20:25:08.792209 1111526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:08.792264 1111526 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:08.807372 1111526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
I0731 20:25:08.807775 1111526 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:08.808277 1111526 main.go:141] libmachine: Using API Version  1
I0731 20:25:08.808304 1111526 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:08.808648 1111526 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:08.808866 1111526 main.go:141] libmachine: (functional-110390) Calling .DriverName
I0731 20:25:08.809093 1111526 ssh_runner.go:195] Run: systemctl --version
I0731 20:25:08.809120 1111526 main.go:141] libmachine: (functional-110390) Calling .GetSSHHostname
I0731 20:25:08.811560 1111526 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:08.811922 1111526 main.go:141] libmachine: (functional-110390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cc:6e", ip: ""} in network mk-functional-110390: {Iface:virbr1 ExpiryTime:2024-07-31 21:22:13 +0000 UTC Type:0 Mac:52:54:00:34:cc:6e Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-110390 Clientid:01:52:54:00:34:cc:6e}
I0731 20:25:08.811949 1111526 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined IP address 192.168.39.234 and MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:08.812097 1111526 main.go:141] libmachine: (functional-110390) Calling .GetSSHPort
I0731 20:25:08.812253 1111526 main.go:141] libmachine: (functional-110390) Calling .GetSSHKeyPath
I0731 20:25:08.812386 1111526 main.go:141] libmachine: (functional-110390) Calling .GetSSHUsername
I0731 20:25:08.812527 1111526 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/functional-110390/id_rsa Username:docker}
I0731 20:25:08.926432 1111526 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 20:25:08.967586 1111526 main.go:141] libmachine: Making call to close driver server
I0731 20:25:08.967604 1111526 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:08.967887 1111526 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:08.967914 1111526 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:08.967923 1111526 main.go:141] libmachine: Making call to close driver server
I0731 20:25:08.967930 1111526 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:08.968163 1111526 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:08.968191 1111526 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh pgrep buildkitd: exit status 1 (201.596579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image build -t localhost/my-image:functional-110390 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 image build -t localhost/my-image:functional-110390 testdata/build --alsologtostderr: (2.644528609s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-110390 image build -t localhost/my-image:functional-110390 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 213d9777906
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-110390
--> e1fa5e38f78
Successfully tagged localhost/my-image:functional-110390
e1fa5e38f789e529737915bb87009a4b5d9819b4fa345887f0a9761f0966ea07
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-110390 image build -t localhost/my-image:functional-110390 testdata/build --alsologtostderr:
I0731 20:25:09.097222 1111603 out.go:291] Setting OutFile to fd 1 ...
I0731 20:25:09.097484 1111603 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.097495 1111603 out.go:304] Setting ErrFile to fd 2...
I0731 20:25:09.097499 1111603 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 20:25:09.097722 1111603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
I0731 20:25:09.098291 1111603 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.098899 1111603 config.go:182] Loaded profile config "functional-110390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 20:25:09.099276 1111603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.099321 1111603 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.114691 1111603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33413
I0731 20:25:09.115135 1111603 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.115710 1111603 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.115793 1111603 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.116177 1111603 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.116359 1111603 main.go:141] libmachine: (functional-110390) Calling .GetState
I0731 20:25:09.118166 1111603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 20:25:09.118201 1111603 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 20:25:09.133019 1111603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
I0731 20:25:09.133485 1111603 main.go:141] libmachine: () Calling .GetVersion
I0731 20:25:09.133988 1111603 main.go:141] libmachine: Using API Version  1
I0731 20:25:09.134015 1111603 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 20:25:09.134309 1111603 main.go:141] libmachine: () Calling .GetMachineName
I0731 20:25:09.134502 1111603 main.go:141] libmachine: (functional-110390) Calling .DriverName
I0731 20:25:09.134701 1111603 ssh_runner.go:195] Run: systemctl --version
I0731 20:25:09.134730 1111603 main.go:141] libmachine: (functional-110390) Calling .GetSSHHostname
I0731 20:25:09.137457 1111603 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.137878 1111603 main.go:141] libmachine: (functional-110390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:cc:6e", ip: ""} in network mk-functional-110390: {Iface:virbr1 ExpiryTime:2024-07-31 21:22:13 +0000 UTC Type:0 Mac:52:54:00:34:cc:6e Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:functional-110390 Clientid:01:52:54:00:34:cc:6e}
I0731 20:25:09.137907 1111603 main.go:141] libmachine: (functional-110390) DBG | domain functional-110390 has defined IP address 192.168.39.234 and MAC address 52:54:00:34:cc:6e in network mk-functional-110390
I0731 20:25:09.138053 1111603 main.go:141] libmachine: (functional-110390) Calling .GetSSHPort
I0731 20:25:09.138231 1111603 main.go:141] libmachine: (functional-110390) Calling .GetSSHKeyPath
I0731 20:25:09.138403 1111603 main.go:141] libmachine: (functional-110390) Calling .GetSSHUsername
I0731 20:25:09.138525 1111603 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/functional-110390/id_rsa Username:docker}
I0731 20:25:09.226797 1111603 build_images.go:161] Building image from path: /tmp/build.1332975174.tar
I0731 20:25:09.226872 1111603 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 20:25:09.238615 1111603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1332975174.tar
I0731 20:25:09.242692 1111603 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1332975174.tar: stat -c "%s %y" /var/lib/minikube/build/build.1332975174.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1332975174.tar': No such file or directory
I0731 20:25:09.242721 1111603 ssh_runner.go:362] scp /tmp/build.1332975174.tar --> /var/lib/minikube/build/build.1332975174.tar (3072 bytes)
I0731 20:25:09.267294 1111603 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1332975174
I0731 20:25:09.276570 1111603 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1332975174 -xf /var/lib/minikube/build/build.1332975174.tar
I0731 20:25:09.289127 1111603 crio.go:315] Building image: /var/lib/minikube/build/build.1332975174
I0731 20:25:09.289197 1111603 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-110390 /var/lib/minikube/build/build.1332975174 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 20:25:11.660614 1111603 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-110390 /var/lib/minikube/build/build.1332975174 --cgroup-manager=cgroupfs: (2.371375297s)
I0731 20:25:11.660719 1111603 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1332975174
I0731 20:25:11.675208 1111603 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1332975174.tar
I0731 20:25:11.691604 1111603 build_images.go:217] Built localhost/my-image:functional-110390 from /tmp/build.1332975174.tar
I0731 20:25:11.691641 1111603 build_images.go:133] succeeded building to: functional-110390
I0731 20:25:11.691647 1111603 build_images.go:134] failed building to: 
I0731 20:25:11.691680 1111603 main.go:141] libmachine: Making call to close driver server
I0731 20:25:11.691694 1111603 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:11.691997 1111603 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:11.692016 1111603 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 20:25:11.692025 1111603 main.go:141] libmachine: Making call to close driver server
I0731 20:25:11.692033 1111603 main.go:141] libmachine: (functional-110390) Calling .Close
I0731 20:25:11.692038 1111603 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
I0731 20:25:11.692366 1111603 main.go:141] libmachine: (functional-110390) DBG | Closing plugin on server side
I0731 20:25:11.692384 1111603 main.go:141] libmachine: Successfully made call to close driver server
I0731 20:25:11.692422 1111603 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)
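The STEP output above implies a three-line build context: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. As a minimal sketch (not the test's own code), the same flow can be reproduced by hand against this profile; the /tmp/imgbuild directory and the content.txt payload below are hypothetical stand-ins for the testdata/build context the test ships:

    mkdir -p /tmp/imgbuild && cd /tmp/imgbuild
    echo hello > content.txt                      # hypothetical payload
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-110390 image build -t localhost/my-image:functional-110390 .
    out/minikube-linux-amd64 -p functional-110390 image ls    # the new localhost/my-image tag should appear

With the crio runtime the build is handed to podman inside the guest, which is why the stderr above shows sudo podman build ... --cgroup-manager=cgroupfs.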

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.51006748s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-110390
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-110390 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-110390 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ppdng" [ed37d39f-f5f6-4e7e-916c-ae346bf65ef5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ppdng" [ed37d39f-f5f6-4e7e-916c-ae346bf65ef5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004965664s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1109295: os: process already finished
helpers_test.go:502: unable to terminate pid 1109303: os: process already finished
helpers_test.go:502: unable to terminate pid 1109341: os: process already finished
helpers_test.go:508: unable to kill pid 1109265: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-110390 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fab13f48-5c75-4e28-98fe-773df77af34e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fab13f48-5c75-4e28-98fe-773df77af34e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003676388s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr: (2.53944333s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-110390
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr: (1.372849896s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image save docker.io/kicbase/echo-server:functional-110390 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image rm docker.io/kicbase/echo-server:functional-110390 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-110390
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 image save --daemon docker.io/kicbase/echo-server:functional-110390 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-110390
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)
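Taken together, the ImageCommands subtests above exercise the full round trip between the host Docker daemon and the cluster's image store. A condensed sketch of that round trip, reusing the commands from the log (only the tarball path /tmp/echo-server-save.tar differs from the workspace path used by the test):

    docker pull docker.io/kicbase/echo-server:1.0
    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-110390
    out/minikube-linux-amd64 -p functional-110390 image load --daemon docker.io/kicbase/echo-server:functional-110390    # host daemon -> cluster
    out/minikube-linux-amd64 -p functional-110390 image save docker.io/kicbase/echo-server:functional-110390 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-110390 image rm docker.io/kicbase/echo-server:functional-110390
    out/minikube-linux-amd64 -p functional-110390 image load /tmp/echo-server-save.tar                                   # tarball -> cluster
    out/minikube-linux-amd64 -p functional-110390 image save --daemon docker.io/kicbase/echo-server:functional-110390    # cluster -> host daemon
    docker image inspect docker.io/kicbase/echo-server:functional-110390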

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service list -o json
functional_test.go:1490: Took "441.299782ms" to run "out/minikube-linux-amd64 -p functional-110390 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.234:31023
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.234:31023
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
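The ServiceCmd subtests reduce to a short workflow: create a deployment, expose it as a NodePort service, then let minikube resolve the node URL. A sketch built from the commands in the log; the final curl is an extra verification step, not something the test runs:

    kubectl --context functional-110390 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-110390 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-110390 service list
    out/minikube-linux-amd64 -p functional-110390 service hello-node --url      # printed http://192.168.39.234:31023 above
    curl -s "$(out/minikube-linux-amd64 -p functional-110390 service hello-node --url)"

The --https and --format={{.IP}} variants above only change how that same endpoint is rendered.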

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 update-context --alsologtostderr -v=2
2024/07/31 20:25:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
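update-context rewrites the profile's kubeconfig entry so the API server address matches the cluster's current IP and port; the no_changes/no_minikube_cluster/no_clusters variants only differ in the kubeconfig state they start from. A quick manual check, assuming the same profile (the kubectl inspection step is an addition, not part of the test):

    out/minikube-linux-amd64 -p functional-110390 update-context
    kubectl config view --minify --context functional-110390 -o jsonpath='{.clusters[0].cluster.server}'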

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-110390 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.197.236 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
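AccessDirect confirms that the tunnel started earlier gave the nginx-svc LoadBalancer service a reachable external IP (10.109.197.236 above). A sketch of the same check by hand, with the tunnel kept running in another terminal; the curl is an illustrative extra, and the service comes from the testdata/testsvc.yaml applied in WaitService/Setup:

    out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr      # leave running; it may need to add routes on the host
    kubectl --context functional-110390 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s "http://$(kubectl --context functional-110390 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"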

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-110390 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdany-port417996061/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722457485736029144" to /tmp/TestFunctionalparallelMountCmdany-port417996061/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722457485736029144" to /tmp/TestFunctionalparallelMountCmdany-port417996061/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722457485736029144" to /tmp/TestFunctionalparallelMountCmdany-port417996061/001/test-1722457485736029144
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.034442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 20:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 20:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 20:24 test-1722457485736029144
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh cat /mount-9p/test-1722457485736029144
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-110390 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [235370b9-6a2b-4e79-a7d2-d42b40657f6e] Pending
helpers_test.go:344: "busybox-mount" [235370b9-6a2b-4e79-a7d2-d42b40657f6e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [235370b9-6a2b-4e79-a7d2-d42b40657f6e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [235370b9-6a2b-4e79-a7d2-d42b40657f6e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.003418902s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-110390 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdany-port417996061/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.91s)
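The any-port mount test drives a 9p share from a host directory into the guest at /mount-9p and has a pod read and write through it. A condensed sketch of the manual equivalent; /tmp/hostdir is an arbitrary placeholder, and the cleanup commands are the same ones the later mount tests use:

    out/minikube-linux-amd64 mount -p functional-110390 /tmp/hostdir:/mount-9p &   # keep the mount helper running
    out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-110390 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-110390 ssh "sudo umount -f /mount-9p"
    out/minikube-linux-amd64 mount -p functional-110390 --kill=true                 # stop any leftover mount helpers

The first findmnt above exits 1 apparently because it races the mount becoming ready; the test simply retries and passes.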

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "246.493027ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "51.652221ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "258.93001ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "53.402836ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdspecific-port2370817977/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.666902ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdspecific-port2370817977/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "sudo umount -f /mount-9p": exit status 1 (226.657768ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-110390 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdspecific-port2370817977/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T" /mount1: exit status 1 (253.509266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-110390 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-110390 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-110390 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4094832051/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-110390
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-110390
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-110390
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-430887 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:27:00.018687 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 20:27:27.702204 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-430887 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.556520303s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.19s)
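StartCluster brings up a multi-control-plane (HA) cluster on KVM with a single start invocation; status then has to report every node as running. The two commands, copied from the log:

    out/minikube-linux-amd64 start -p ha-430887 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr

The cert_rotation errors interleaved above reference client certs of the already-deleted addons-877061 profile and appear to be background noise from the test binary rather than a problem with this cluster.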

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-430887 -- rollout status deployment/busybox: (3.285134939s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-lt5n8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-tkmzn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-lt5n8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-tkmzn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-lt5n8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-tkmzn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.36s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-lt5n8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-lt5n8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-tkmzn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-tkmzn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
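PingHostFromPods verifies pod-to-host connectivity: inside each busybox pod, host.minikube.internal should resolve to the host side of the minikube network (192.168.39.1 in this run) and a single ping to it must succeed. The per-pod check, as issued by the test:

    out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-430887 -- exec busybox-fc5497c4f-hhwcx -- sh -c "ping -c 1 192.168.39.1"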

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-430887 -v=7 --alsologtostderr
E0731 20:29:31.357471 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.362748 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.373069 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.393347 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.433628 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.513984 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.674279 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:31.994441 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:32.634624 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:33.914801 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:36.475384 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:29:41.596331 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-430887 -v=7 --alsologtostderr: (53.543107798s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.30s)
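AddWorkerNode grows the running HA cluster by one worker node (ha-430887-m04 in this run) and re-checks status. The commands, as in the log:

    out/minikube-linux-amd64 node add -p ha-430887 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr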

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-430887 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp testdata/cp-test.txt ha-430887:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887:/home/docker/cp-test.txt ha-430887-m02:/home/docker/cp-test_ha-430887_ha-430887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test_ha-430887_ha-430887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887:/home/docker/cp-test.txt ha-430887-m03:/home/docker/cp-test_ha-430887_ha-430887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test_ha-430887_ha-430887-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887:/home/docker/cp-test.txt ha-430887-m04:/home/docker/cp-test_ha-430887_ha-430887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test_ha-430887_ha-430887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp testdata/cp-test.txt ha-430887-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test.txt"
E0731 20:29:51.837208 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m02:/home/docker/cp-test.txt ha-430887:/home/docker/cp-test_ha-430887-m02_ha-430887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test_ha-430887-m02_ha-430887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m02:/home/docker/cp-test.txt ha-430887-m03:/home/docker/cp-test_ha-430887-m02_ha-430887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test_ha-430887-m02_ha-430887-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m02:/home/docker/cp-test.txt ha-430887-m04:/home/docker/cp-test_ha-430887-m02_ha-430887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test_ha-430887-m02_ha-430887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp testdata/cp-test.txt ha-430887-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt ha-430887:/home/docker/cp-test_ha-430887-m03_ha-430887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test_ha-430887-m03_ha-430887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt ha-430887-m02:/home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test_ha-430887-m03_ha-430887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m03:/home/docker/cp-test.txt ha-430887-m04:/home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test_ha-430887-m03_ha-430887-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp testdata/cp-test.txt ha-430887-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3671382305/001/cp-test_ha-430887-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt ha-430887:/home/docker/cp-test_ha-430887-m04_ha-430887.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887 "sudo cat /home/docker/cp-test_ha-430887-m04_ha-430887.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt ha-430887-m02:/home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m02 "sudo cat /home/docker/cp-test_ha-430887-m04_ha-430887-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 cp ha-430887-m04:/home/docker/cp-test.txt ha-430887-m03:/home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 ssh -n ha-430887-m03 "sudo cat /home/docker/cp-test_ha-430887-m04_ha-430887-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.39s)
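
The copy/verify sequence above repeats the same two-step pattern for every node pair: `minikube cp` pushes testdata/cp-test.txt to a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back. A minimal Go sketch of that round trip (illustration only; it assumes the minikube binary at out/minikube-linux-amd64 and the ha-430887 profile from this run):

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// copyAndVerify pushes a local file to a node with `minikube cp`, reads it
	// back over `minikube ssh`, and checks the round trip preserved the contents.
	func copyAndVerify(profile, node, local, remote string) error {
		want, err := os.ReadFile(local)
		if err != nil {
			return err
		}
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", local, node+":"+remote)
		if out, err := cp.CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v: %s", err, out)
		}
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
		got, err := cat.Output()
		if err != nil {
			return fmt.Errorf("ssh cat failed: %v", err)
		}
		if !bytes.Contains(got, bytes.TrimSpace(want)) {
			return fmt.Errorf("contents differ after copy to %s", node)
		}
		return nil
	}

	func main() {
		if err := copyAndVerify("ha-430887", "ha-430887-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("copy verified")
	}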

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.482110732s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)
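
The Degraded/HAppy checks in this group are driven entirely by `minikube profile list --output json`. A rough sketch of consuming that output follows; the JSON field names (`valid`, `invalid`, `Name`, `Status`) are assumptions from memory rather than something shown in this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profile keeps only the fields this sketch prints; the real payload also
	// carries the full profile config. Field names here are assumptions.
	type profile struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	}

	type profileList struct {
		Valid   []profile `json:"valid"`
		Invalid []profile `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// e.g. "ha-430887: Degraded" while a control-plane node is stopped
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}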

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-430887 node delete m03 -v=7 --alsologtostderr: (16.139234178s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.86s)
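
The final `kubectl get nodes -o go-template` step above prints one Ready-condition status per node, so the test only has to confirm that every line reads True. An equivalent check, sketched in Go around the same template (illustrative only):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// readyTemplate prints the status of each node's Ready condition, one per
	// line, matching the template used in the log above.
	const readyTemplate = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+readyTemplate).Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				log.Fatalf("found a node that is not Ready: %q", status)
			}
		}
		fmt.Println("all remaining nodes Ready")
	}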

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (340.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-430887 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:44:31.358233 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:45:54.401503 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:47:00.018921 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-430887 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m39.507057385s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (340.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (72.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-430887 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-430887 --control-plane -v=7 --alsologtostderr: (1m11.894616626s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-430887 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (55.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-319368 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0731 20:49:31.358158 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-319368 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.519974583s)
--- PASS: TestJSONOutput/start/Command (55.52s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-319368 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-319368 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.62s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-319368 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-319368 --output=json --user=testUser: (6.624599711s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-163855 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-163855 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.853097ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9b64b58a-0d3e-4be4-9553-f83a5a60be50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-163855] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bffd3ae-4e36-46c7-80c3-d5f784d32ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"f1cfa7d9-ebe8-49ba-8067-8c148931fdc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e037bfe2-f960-4e27-b3c1-06b843a21e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig"}}
	{"specversion":"1.0","id":"5c25c3a8-3cde-4f35-9dfc-c0df6981861a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube"}}
	{"specversion":"1.0","id":"7e4873e6-0f56-4c5b-9c9c-6e57982c603c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"773ae776-e2c1-4310-a0ae-33f58f733765","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f17b0761-62e9-4e74-9ade-24bac25a2676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-163855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-163855
--- PASS: TestErrorJSONOutput (0.19s)
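
The `--output=json` stream captured above is one JSON object per line in a CloudEvents-style envelope, with the useful payload under `data`. A small decoder sketch that reads the same stream from stdin; the struct mirrors only the fields visible in this output:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// event mirrors the fields visible in the JSON lines above; anything else
	// in the envelope is ignored by the decoder.
	type event struct {
		Type string `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data struct {
			Message     string `json:"message"`
			Name        string `json:"name"`
			CurrentStep string `json:"currentstep"`
			TotalSteps  string `json:"totalsteps"`
			ExitCode    string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this-program
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error (exit code %s): %s\n", ev.Data.ExitCode, ev.Data.Message)
			default:
				fmt.Println(ev.Data.Message)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}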

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (85.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-201768 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-201768 --driver=kvm2  --container-runtime=crio: (39.110267505s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-204185 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-204185 --driver=kvm2  --container-runtime=crio: (43.289251471s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-201768
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-204185
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-204185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-204185
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-204185: (1.034461326s)
helpers_test.go:175: Cleaning up "first-201768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-201768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-201768: (1.032942043s)
--- PASS: TestMinikubeProfile (85.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-973451 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0731 20:52:00.018542 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-973451 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.02770834s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-973451 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-973451 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
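
VerifyMountFirst confirms the host directory is visible inside the VM (`ls /minikube-host`) and that it is served over 9p (`mount | grep 9p`). The same check, sketched with os/exec (illustrative; assumes the mount-start-1-973451 profile from this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// has9pMount asks the guest for its mount table and looks for a 9p entry,
	// mirroring the `minikube ssh -- mount | grep 9p` step above.
	func has9pMount(profile string) (bool, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "mount").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") {
				fmt.Println(line) // e.g. the /minikube-host entry
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := has9pMount("mount-start-1-973451")
		if err != nil {
			log.Fatal(err)
		}
		if !ok {
			log.Fatal("no 9p mount found in the guest")
		}
	}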

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004468 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004468 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.226771642s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.23s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.93s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-973451 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.93s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-004468
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-004468: (1.285769909s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.07s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004468
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004468: (22.067114177s)
--- PASS: TestMountStart/serial/RestartStopped (23.07s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004468 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (120.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-220043 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:54:31.357765 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 20:55:03.063648 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-220043 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.23227521s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-220043 -- rollout status deployment/busybox: (3.038245013s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-6q6qp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-8tqdr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-6q6qp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-8tqdr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-6q6qp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-8tqdr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.53s)
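
DeployApp2Nodes rolls out the busybox deployment, collects the pod names with a jsonpath query, and then runs nslookup inside each pod for three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local). A compact sketch of that loop (illustrative; it calls kubectl directly rather than the minikube kubectl wrapper):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Pod names, space separated, as produced by the jsonpath query above.
		out, err := exec.Command("kubectl", "get", "pods", "-o",
			"jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatal(err)
		}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, name := range names {
				// Resolve each name from inside the pod; a failure here usually
				// means cluster DNS is broken on the node hosting that pod.
				if res, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput(); err != nil {
					log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, res)
				}
				fmt.Printf("%s resolved %s\n", pod, name)
			}
		}
	}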

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-6q6qp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-6q6qp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-8tqdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-220043 -- exec busybox-fc5497c4f-8tqdr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (48.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-220043 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-220043 -v 3 --alsologtostderr: (47.927998568s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.50s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-220043 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
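
The labels check above flattens every node's label map through a jsonpath range. The same information can be inspected by decoding `kubectl get nodes -o json` directly; a rough equivalent:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// nodeList is a minimal shape of `kubectl get nodes -o json`: only the
	// metadata this sketch prints.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name   string            `json:"name"`
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var nl nodeList
		if err := json.Unmarshal(out, &nl); err != nil {
			log.Fatal(err)
		}
		for _, n := range nl.Items {
			fmt.Println(n.Metadata.Name, n.Metadata.Labels)
		}
	}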

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp testdata/cp-test.txt multinode-220043:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043:/home/docker/cp-test.txt multinode-220043-m02:/home/docker/cp-test_multinode-220043_multinode-220043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test_multinode-220043_multinode-220043-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043:/home/docker/cp-test.txt multinode-220043-m03:/home/docker/cp-test_multinode-220043_multinode-220043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test_multinode-220043_multinode-220043-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp testdata/cp-test.txt multinode-220043-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt multinode-220043:/home/docker/cp-test_multinode-220043-m02_multinode-220043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test_multinode-220043-m02_multinode-220043.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m02:/home/docker/cp-test.txt multinode-220043-m03:/home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test_multinode-220043-m02_multinode-220043-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp testdata/cp-test.txt multinode-220043-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3543853040/001/cp-test_multinode-220043-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt multinode-220043:/home/docker/cp-test_multinode-220043-m03_multinode-220043.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043 "sudo cat /home/docker/cp-test_multinode-220043-m03_multinode-220043.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 cp multinode-220043-m03:/home/docker/cp-test.txt multinode-220043-m02:/home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 ssh -n multinode-220043-m02 "sudo cat /home/docker/cp-test_multinode-220043-m03_multinode-220043-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.36s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-220043 node stop m03: (1.40599076s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-220043 status: exit status 7 (424.630318ms)

                                                
                                                
-- stdout --
	multinode-220043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-220043-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-220043-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr: exit status 7 (430.961438ms)

                                                
                                                
-- stdout --
	multinode-220043
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-220043-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-220043-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:56:22.468065 1129137 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:56:22.468491 1129137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:56:22.468552 1129137 out.go:304] Setting ErrFile to fd 2...
	I0731 20:56:22.468566 1129137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:56:22.469046 1129137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 20:56:22.469512 1129137 out.go:298] Setting JSON to false
	I0731 20:56:22.469550 1129137 mustload.go:65] Loading cluster: multinode-220043
	I0731 20:56:22.469634 1129137 notify.go:220] Checking for updates...
	I0731 20:56:22.469940 1129137 config.go:182] Loaded profile config "multinode-220043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:56:22.469956 1129137 status.go:255] checking status of multinode-220043 ...
	I0731 20:56:22.470322 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.470370 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.491119 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I0731 20:56:22.491599 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.492277 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.492302 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.492726 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.492917 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetState
	I0731 20:56:22.494658 1129137 status.go:330] multinode-220043 host status = "Running" (err=<nil>)
	I0731 20:56:22.494681 1129137 host.go:66] Checking if "multinode-220043" exists ...
	I0731 20:56:22.494995 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.495038 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.510810 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0731 20:56:22.511377 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.511920 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.511950 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.512313 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.512527 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetIP
	I0731 20:56:22.515231 1129137 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:56:22.515639 1129137 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:56:22.515677 1129137 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:56:22.515809 1129137 host.go:66] Checking if "multinode-220043" exists ...
	I0731 20:56:22.516135 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.516179 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.532114 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I0731 20:56:22.532880 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.533424 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.533446 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.533835 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.534051 1129137 main.go:141] libmachine: (multinode-220043) Calling .DriverName
	I0731 20:56:22.534264 1129137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:56:22.534309 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetSSHHostname
	I0731 20:56:22.537368 1129137 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:56:22.537825 1129137 main.go:141] libmachine: (multinode-220043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:33:33", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:53:33 +0000 UTC Type:0 Mac:52:54:00:cc:33:33 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-220043 Clientid:01:52:54:00:cc:33:33}
	I0731 20:56:22.537856 1129137 main.go:141] libmachine: (multinode-220043) DBG | domain multinode-220043 has defined IP address 192.168.39.184 and MAC address 52:54:00:cc:33:33 in network mk-multinode-220043
	I0731 20:56:22.537999 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetSSHPort
	I0731 20:56:22.538188 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetSSHKeyPath
	I0731 20:56:22.538352 1129137 main.go:141] libmachine: (multinode-220043) Calling .GetSSHUsername
	I0731 20:56:22.538508 1129137 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043/id_rsa Username:docker}
	I0731 20:56:22.619099 1129137 ssh_runner.go:195] Run: systemctl --version
	I0731 20:56:22.624946 1129137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:56:22.639246 1129137 kubeconfig.go:125] found "multinode-220043" server: "https://192.168.39.184:8443"
	I0731 20:56:22.639281 1129137 api_server.go:166] Checking apiserver status ...
	I0731 20:56:22.639326 1129137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:56:22.655714 1129137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1119/cgroup
	W0731 20:56:22.665578 1129137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1119/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:56:22.665645 1129137 ssh_runner.go:195] Run: ls
	I0731 20:56:22.669762 1129137 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0731 20:56:22.674280 1129137 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0731 20:56:22.674305 1129137 status.go:422] multinode-220043 apiserver status = Running (err=<nil>)
	I0731 20:56:22.674316 1129137 status.go:257] multinode-220043 status: &{Name:multinode-220043 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:56:22.674339 1129137 status.go:255] checking status of multinode-220043-m02 ...
	I0731 20:56:22.674657 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.674683 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.690729 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37615
	I0731 20:56:22.691169 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.691658 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.691684 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.692004 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.692250 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetState
	I0731 20:56:22.693791 1129137 status.go:330] multinode-220043-m02 host status = "Running" (err=<nil>)
	I0731 20:56:22.693812 1129137 host.go:66] Checking if "multinode-220043-m02" exists ...
	I0731 20:56:22.694162 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.694202 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.710173 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0731 20:56:22.710631 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.711083 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.711109 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.711434 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.711627 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetIP
	I0731 20:56:22.714206 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | domain multinode-220043-m02 has defined MAC address 52:54:00:59:ab:db in network mk-multinode-220043
	I0731 20:56:22.714576 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:ab:db", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:54:43 +0000 UTC Type:0 Mac:52:54:00:59:ab:db Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-220043-m02 Clientid:01:52:54:00:59:ab:db}
	I0731 20:56:22.714602 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | domain multinode-220043-m02 has defined IP address 192.168.39.193 and MAC address 52:54:00:59:ab:db in network mk-multinode-220043
	I0731 20:56:22.714759 1129137 host.go:66] Checking if "multinode-220043-m02" exists ...
	I0731 20:56:22.715091 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.715133 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.730728 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0731 20:56:22.731201 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.731666 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.731689 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.732010 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.732202 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .DriverName
	I0731 20:56:22.732387 1129137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:56:22.732407 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetSSHHostname
	I0731 20:56:22.735172 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | domain multinode-220043-m02 has defined MAC address 52:54:00:59:ab:db in network mk-multinode-220043
	I0731 20:56:22.735593 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:ab:db", ip: ""} in network mk-multinode-220043: {Iface:virbr1 ExpiryTime:2024-07-31 21:54:43 +0000 UTC Type:0 Mac:52:54:00:59:ab:db Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-220043-m02 Clientid:01:52:54:00:59:ab:db}
	I0731 20:56:22.735622 1129137 main.go:141] libmachine: (multinode-220043-m02) DBG | domain multinode-220043-m02 has defined IP address 192.168.39.193 and MAC address 52:54:00:59:ab:db in network mk-multinode-220043
	I0731 20:56:22.735769 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetSSHPort
	I0731 20:56:22.735946 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetSSHKeyPath
	I0731 20:56:22.736074 1129137 main.go:141] libmachine: (multinode-220043-m02) Calling .GetSSHUsername
	I0731 20:56:22.736238 1129137 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19360-1093692/.minikube/machines/multinode-220043-m02/id_rsa Username:docker}
	I0731 20:56:22.818714 1129137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:56:22.832565 1129137 status.go:257] multinode-220043-m02 status: &{Name:multinode-220043-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:56:22.832615 1129137 status.go:255] checking status of multinode-220043-m03 ...
	I0731 20:56:22.832963 1129137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:56:22.832992 1129137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:56:22.848855 1129137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0731 20:56:22.849305 1129137 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:56:22.849796 1129137 main.go:141] libmachine: Using API Version  1
	I0731 20:56:22.849821 1129137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:56:22.850131 1129137 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:56:22.850333 1129137 main.go:141] libmachine: (multinode-220043-m03) Calling .GetState
	I0731 20:56:22.851708 1129137 status.go:330] multinode-220043-m03 host status = "Stopped" (err=<nil>)
	I0731 20:56:22.851726 1129137 status.go:343] host is not running, skipping remaining checks
	I0731 20:56:22.851734 1129137 status.go:257] multinode-220043-m03 status: &{Name:multinode-220043-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
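
Buried in the stderr above is how `minikube status` decides the apiserver is healthy: it finds the server address recorded in the kubeconfig and probes /healthz over HTTPS, reporting Running on a 200. A stripped-down sketch of that probe (illustrative only; it skips certificate verification for brevity, whereas the real check would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Address taken from the log above; in practice it comes from the
		// kubeconfig entry that status.go reports finding for the profile.
		url := "https://192.168.39.184:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative shortcut: disable verification instead of loading the
			// cluster CA, which the real check uses.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			log.Fatalf("apiserver unreachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Printf("%s returned %d\n", url, resp.StatusCode) // 200 means apiserver: Running
	}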

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 node start m03 -v=7 --alsologtostderr
E0731 20:57:00.018855 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-220043 node start m03 -v=7 --alsologtostderr: (38.263829246s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-220043 node delete m03: (1.811369881s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.33s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (180.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-220043 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 21:07:00.019160 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-220043 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.642571254s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-220043 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-220043
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-220043-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-220043-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.955271ms)

                                                
                                                
-- stdout --
	* [multinode-220043-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-220043-m02' is duplicated with machine name 'multinode-220043-m02' in profile 'multinode-220043'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-220043-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-220043-m03 --driver=kvm2  --container-runtime=crio: (44.266748007s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-220043
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-220043: exit status 80 (210.220384ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-220043 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-220043-m03 already exists in multinode-220043-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-220043-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-220043-m03: (1.013700936s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.60s)
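Both non-zero exits above are the intended guards: a new profile may not reuse a machine name that already belongs to an existing profile (MK_USAGE, exit 14), and "node add" refuses a node name already taken by a standalone profile (GUEST_NODE_ADD, exit 80). A minimal sketch of the working alternatives, using only commands that appear in this run (profile names are this run's):

	out/minikube-linux-amd64 profile list                      # check which names are already taken
	out/minikube-linux-amd64 delete -p multinode-220043-m03    # remove the clashing standalone profile
	out/minikube-linux-amd64 node add -p multinode-220043      # the next node name (m03) is then free again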

                                                
                                    
TestScheduledStopUnix (113.65s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-499756 --memory=2048 --driver=kvm2  --container-runtime=crio
E0731 21:11:43.064556 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 21:12:00.018421 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-499756 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.986358485s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-499756 -n scheduled-stop-499756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499756 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499756 -n scheduled-stop-499756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-499756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-499756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-499756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-499756: exit status 7 (67.060066ms)

                                                
                                                
-- stdout --
	scheduled-stop-499756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499756 -n scheduled-stop-499756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-499756 -n scheduled-stop-499756: exit status 7 (65.848015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-499756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-499756
--- PASS: TestScheduledStopUnix (113.65s)
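The sequence above is minikube's scheduled-stop cycle: schedule a stop, confirm the countdown, cancel it, re-schedule with a short delay, then verify the host really reaches "Stopped" (the exit status 7 from "status" is the expected outcome, not a failure). A minimal sketch of that cycle, using only flags that appear in this run:

	out/minikube-linux-amd64 stop -p scheduled-stop-499756 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-499756    # a stop is pending
	out/minikube-linux-amd64 stop -p scheduled-stop-499756 --cancel-scheduled            # abort the pending stop
	out/minikube-linux-amd64 stop -p scheduled-stop-499756 --schedule 15s                # schedule again and let it fire
	out/minikube-linux-amd64 status -p scheduled-stop-499756                             # exit status 7 once stopped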

                                                
                                    
TestRunningBinaryUpgrade (218.38s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3120024607 start -p running-upgrade-084648 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0731 21:14:31.357848 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3120024607 start -p running-upgrade-084648 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.185211136s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-084648 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-084648 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.470731572s)
helpers_test.go:175: Cleaning up "running-upgrade-084648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-084648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-084648: (2.199857922s)
--- PASS: TestRunningBinaryUpgrade (218.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (89.86373ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-081034] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
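Exit status 14 (MK_USAGE) is the expected result here: --no-kubernetes and --kubernetes-version are mutually exclusive. The accepted forms, taken from the invocations later in this group, are a start without a pinned version, or unsetting a version pinned in the global config as the error message suggests:

	out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 config unset kubernetes-version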

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (92.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-081034 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-081034 --driver=kvm2  --container-runtime=crio: (1m31.795436039s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-081034 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.05s)

                                                
                                    
TestPause/serial/Start (98.39s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355751 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-355751 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.38790301s)
--- PASS: TestPause/serial/Start (98.39s)

                                                
                                    
TestNetworkPlugins/group/false (3.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-605794 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-605794 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.047215ms)

                                                
                                                
-- stdout --
	* [false-605794] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19360
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 21:14:46.946367 1137678 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:14:46.946473 1137678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:14:46.946481 1137678 out.go:304] Setting ErrFile to fd 2...
	I0731 21:14:46.946485 1137678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:14:46.946654 1137678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19360-1093692/.minikube/bin
	I0731 21:14:46.947235 1137678 out.go:298] Setting JSON to false
	I0731 21:14:46.948348 1137678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17838,"bootTime":1722442649,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:14:46.948416 1137678 start.go:139] virtualization: kvm guest
	I0731 21:14:46.950455 1137678 out.go:177] * [false-605794] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:14:46.951654 1137678 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 21:14:46.951686 1137678 notify.go:220] Checking for updates...
	I0731 21:14:46.954122 1137678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:14:46.955408 1137678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19360-1093692/kubeconfig
	I0731 21:14:46.956568 1137678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19360-1093692/.minikube
	I0731 21:14:46.957749 1137678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:14:46.959021 1137678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:14:46.960613 1137678 config.go:182] Loaded profile config "NoKubernetes-081034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:14:46.960738 1137678 config.go:182] Loaded profile config "pause-355751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:14:46.960837 1137678 config.go:182] Loaded profile config "running-upgrade-084648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 21:14:46.960954 1137678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:14:46.998292 1137678 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:14:46.999478 1137678 start.go:297] selected driver: kvm2
	I0731 21:14:46.999509 1137678 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:14:46.999525 1137678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:14:47.002200 1137678 out.go:177] 
	W0731 21:14:47.003680 1137678 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 21:14:47.004909 1137678 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-605794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-605794

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-605794"

                                                
                                                
----------------------- debugLogs end: false-605794 [took: 3.279427628s] --------------------------------
helpers_test.go:175: Cleaning up "false-605794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-605794
--- PASS: TestNetworkPlugins/group/false (3.57s)
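This non-zero exit is also by design: with --container-runtime=crio a CNI is required, so --cni=false is rejected with MK_USAGE before any VM is created, which is why every debugLogs probe above reports a missing profile or context. A hedged sketch of a start line the guard would accept; --cni=bridge is an illustrative choice here, not something this suite runs:

	out/minikube-linux-amd64 start -p false-605794 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio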

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (66.24s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m4.851367269s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-081034 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-081034 status -o json: exit status 2 (241.285785ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-081034","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-081034
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-081034: (1.148333161s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.24s)

                                                
                                    
TestNoKubernetes/serial/Start (24.41s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-081034 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.411036521s)
--- PASS: TestNoKubernetes/serial/Start (24.41s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-081034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-081034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.114024ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
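The assertion in this test is the non-zero exit itself: "systemctl is-active --quiet" returns 0 only when the unit is active and typically 3 when it is not running, which ssh surfaces as "Process exited with status 3". The same check can be run by hand against this profile:

	out/minikube-linux-amd64 ssh -p NoKubernetes-081034 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"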

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.27s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.801448385s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0731 21:17:00.018338 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.467041224s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.27s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.4s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-081034
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-081034: (1.399682948s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-081034 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-081034 --driver=kvm2  --container-runtime=crio: (23.743077441s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-081034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-081034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.945343ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (130.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.856220311 start -p stopped-upgrade-140201 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.856220311 start -p stopped-upgrade-140201 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m28.229855321s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.856220311 -p stopped-upgrade-140201 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.856220311 -p stopped-upgrade-140201 stop: (2.154531129s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-140201 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0731 21:19:14.404382 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 21:19:31.357328 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-140201 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.559204281s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.94s)
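The upgrade path being validated is: create the cluster with an old release binary, stop it, then start the same profile with the binary under test. A condensed sketch of that flow using the commands from this run (the /tmp path is the temporary copy of the v1.26.0 release that the test downloads):

	/tmp/minikube-v1.26.0.856220311 start -p stopped-upgrade-140201 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.856220311 -p stopped-upgrade-140201 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-140201 --memory=2200 --driver=kvm2 --container-runtime=crio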

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-140201
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (65.82s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-018891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-018891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m5.816281561s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018891 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [67c16d33-f140-4fe1-addb-121b6e20e72b] Pending
helpers_test.go:344: "busybox" [67c16d33-f140-4fe1-addb-121b6e20e72b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [67c16d33-f140-4fe1-addb-121b6e20e72b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.006126195s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018891 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.37s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-563652 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-563652 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m5.3714791s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-018891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-018891 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)
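The --images/--registries flags point the metrics-server addon at a stand-in image and a deliberately unreachable registry (fake.domain); the follow-up "kubectl describe deploy/metrics-server" appears to check only that the override landed in the Deployment spec, not that the addon becomes healthy. The pair of commands, as run above:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-018891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-018891 describe deploy/metrics-server -n kube-system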

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-563652 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [31f0f93d-1452-486b-a9b8-0d8c76c45a16] Pending
helpers_test.go:344: "busybox" [31f0f93d-1452-486b-a9b8-0d8c76c45a16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [31f0f93d-1452-486b-a9b8-0d8c76c45a16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003983551s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-563652 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-563652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-563652 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-755535 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-755535 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m10.459148084s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (661.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-018891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 21:24:31.357355 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-018891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m1.583929766s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-018891 -n no-preload-018891
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (661.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [873ec90f-0bdc-41a1-be49-45116eb0ccab] Pending
helpers_test.go:344: "busybox" [873ec90f-0bdc-41a1-be49-45116eb0ccab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [873ec90f-0bdc-41a1-be49-45116eb0ccab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.0039964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-755535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-755535 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (552.85s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-563652 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-563652 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m12.587457794s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563652 -n embed-certs-563652
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (552.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-275462 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-275462 --alsologtostderr -v=3: (6.288820556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275462 -n old-k8s-version-275462: exit status 7 (64.938621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-275462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
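Exit status 7 from minikube status on a stopped profile is expected here, which is why the test records it as "may be ok". The same stop-state check and addon enablement as plain commands (a sketch; addon settings applied while the cluster is stopped take effect on the next start, which is what the later SecondStart/AddonExistsAfterStop steps verify):

  minikube status --format={{.Host}} -p old-k8s-version-275462    # prints "Stopped" and exits 7 while the VM is down
  minikube addons enable dashboard -p old-k8s-version-275462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4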

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (410.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-755535 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 21:28:23.064991 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
E0731 21:29:31.357744 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 21:32:00.019007 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-755535 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (6m50.233363556s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (410.52s)
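This profile pins the API server to a non-default port via --apiserver-port=8444. A quick way to confirm the port survived the restart (a sketch; only the profile name is taken from the run above):

  # the control plane URL reported for this context should end in :8444
  kubectl cluster-info --context default-k8s-diff-port-755535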

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-308216 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-308216 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (46.897116906s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.90s)
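The start flags above combine --network-plugin=cni with an --extra-config pod-network CIDR handed through to kubeadm. One way to confirm the CIDR was applied, as a sketch (the jsonpath query is an assumption and not part of the test):

  # each node's podCIDR should fall inside 10.42.0.0/16
  kubectl --context newest-cni-308216 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'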

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-308216 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-308216 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-308216 --alsologtostderr -v=3: (10.359631213s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-308216 -n newest-cni-308216
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-308216 -n newest-cni-308216: exit status 7 (65.420202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-308216 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-308216 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 21:49:31.357707 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-308216 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (34.692759796s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-308216 -n newest-cni-308216
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-308216 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
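The image audit is a listing compared against the expected Kubernetes image set, which is why the extra kindnetd image is called out above. The listing itself can be reproduced directly; the grep filter is an illustrative way to spot images outside the standard registry, not the test's own logic:

  minikube -p newest-cni-308216 image list --format=json
  # rough manual filter for anything not pulled from registry.k8s.io
  minikube -p newest-cni-308216 image list | grep -v '^registry.k8s.io/'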

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-308216 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-308216 -n newest-cni-308216
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-308216 -n newest-cni-308216: exit status 2 (272.408979ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-308216 -n newest-cni-308216
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-308216 -n newest-cni-308216: exit status 2 (263.79125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-308216 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-308216 -n newest-cni-308216
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-308216 -n newest-cni-308216
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.58s)
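The exit status 2 results above are the expected minikube status behaviour while components are paused. The full pause/verify/unpause cycle from this test, written out as plain commands against the same profile:

  minikube pause -p newest-cni-308216 --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p newest-cni-308216 -n newest-cni-308216   # "Paused", exit status 2
  minikube status --format={{.Kubelet}} -p newest-cni-308216 -n newest-cni-308216     # "Stopped", exit status 2
  minikube unpause -p newest-cni-308216 --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p newest-cni-308216 -n newest-cni-308216   # expected to report Running again, exit 0
  minikube status --format={{.Kubelet}} -p newest-cni-308216 -n newest-cni-308216     # expected to report Running again, exit 0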

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (61.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.094291813s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (115.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m55.882714448s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dn69l" [860e2509-2162-4fff-b560-7650971291e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 21:51:01.143668 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.149062 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.159841 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.180195 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.220577 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.300979 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.461696 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:01.782394 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:02.422736 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
E0731 21:51:03.703950 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-dn69l" [860e2509-2162-4fff-b560-7650971291e7] Running
E0731 21:51:06.265016 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00403274s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)
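The NetCatPod step installs the shared netcat Deployment from the test tree and waits for it to become ready. A minimal sketch of the same steps (the testdata path assumes a checkout of the minikube repository's test/integration directory; the wait timeout is an illustrative value, the test allows up to 15m):

  kubectl --context auto-605794 replace --force -f testdata/netcat-deployment.yaml
  # wait for the netcat pod to pass its readiness checks
  kubectl --context auto-605794 wait --for=condition=ready pod -l app=netcat --timeout=300s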

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (79.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m19.928489074s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.93s)
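Unlike the built-in plugin names used elsewhere in this group, --cni here points at a manifest file, which is how an arbitrary CNI can be installed at start time. A minimal sketch (the manifest path assumes the minikube test tree; the kube-flannel namespace check is an assumption based on where the stock flannel manifest installs its DaemonSet):

  minikube start -p custom-flannel-605794 --memory=3072 --driver=kvm2 --container-runtime=crio \
    --cni=testdata/kube-flannel.yaml --wait=true --wait-timeout=15m
  # confirm the flannel DaemonSet pods came up
  kubectl --context custom-flannel-605794 -n kube-flannel get pods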

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0731 21:51:11.385609 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
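The DNS, Localhost and HairPin probes above are all run from inside the netcat Deployment: service-name resolution, a loopback connection, and a connection back to the pod through its own Service (the hairpin case). The same three checks as plain commands against this context:

  kubectl --context auto-605794 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod dials its own Service name and must be routed back to itself
  kubectl --context auto-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"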

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0731 21:51:42.107231 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.639140115s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vslzw" [eee7a146-0793-4343-9815-9094f9562ba1] Running
E0731 21:52:00.019281 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/addons-877061/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007490514s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
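The ControllerPod check is a readiness wait on the CNI's own controller pod, using the label selector shown above. The kubectl equivalent, as a sketch (the timeout value is illustrative; the test allows up to 10m):

  kubectl --context calico-605794 -n kube-system wait --for=condition=ready pod -l k8s-app=calico-node --timeout=600s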

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rsh2d" [4327e502-16a2-4ea3-92ff-b660a1c6093d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rsh2d" [4327e502-16a2-4ea3-92ff-b660a1c6093d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003741564s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-755535 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-755535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535: exit status 2 (275.347795ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535: exit status 2 (267.922803ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-755535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-755535 -n default-k8s-diff-port-755535
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)
E0731 21:53:44.988969 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/no-preload-018891/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (78.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m18.315211428s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-605794 "pgrep -a kubelet"
E0731 21:52:31.452682 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:31.457978 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:31.468333 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:31.488684 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:31.528985 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:31.609355 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-605794 replace --force -f testdata/netcat-deployment.yaml
E0731 21:52:31.770318 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b2m9v" [04f3ef11-ca25-4491-b502-faa06e44abbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 21:52:32.091467 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:32.731655 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:34.011796 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
E0731 21:52:34.406169 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
E0731 21:52:36.572168 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-b2m9v" [04f3ef11-ca25-4491-b502-faa06e44abbd] Running
E0731 21:52:41.692722 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00318603s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m32.587737498s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xndg5" [5be7e912-3283-4aba-971c-3656798b64b1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006417863s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bvt26" [350860e4-867d-467a-b763-44ab936f8cde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bvt26" [350860e4-867d-467a-b763-44ab936f8cde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003838689s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (84.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-605794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m24.331894675s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-266xz" [4e19d508-1dad-424d-8eef-0449b2d478fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005013684s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tgck8" [6f929927-b568-4aab-8d5b-66f539603b63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 21:53:53.374881 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/old-k8s-version-275462/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-tgck8" [6f929927-b568-4aab-8d5b-66f539603b63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005260062s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-97zwl" [14496ca2-fd05-4dc4-a131-980eb97dec6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-97zwl" [14496ca2-fd05-4dc4-a131-980eb97dec6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004255498s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-605794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-605794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nmwrs" [21291892-841e-4227-8674-cf8e8ec03f68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nmwrs" [21291892-841e-4227-8674-cf8e8ec03f68] Running
E0731 21:54:31.357284 1100976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19360-1093692/.minikube/profiles/functional-110390/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004627154s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-605794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-605794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (35/323)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-318420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-318420
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
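
Unlike the zero-duration skips earlier, this one still takes 0.14s because the placeholder profile created for the group has to be deleted even though the test body never ran. A rough sketch of that skip-then-clean-up shape, with a placeholder profile name standing in for the real helpers from helpers_test.go:

package example

import (
	"os/exec"
	"testing"
)

// TestSkipWithCleanupExample registers profile deletion before skipping;
// t.Cleanup callbacks still run for skipped tests, so the temporary profile
// does not leak into later tests.
func TestSkipWithCleanupExample(t *testing.T) {
	profile := "disable-driver-mounts-example" // placeholder name
	t.Cleanup(func() {
		// equivalent of: out/minikube-linux-amd64 delete -p <profile>
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("cleanup: delete failed: %v\n%s", err, out)
		}
	})
	t.Skip("skipping - only runs on virtualbox")
}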

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-605794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-605794

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-605794"

                                                
                                                
----------------------- debugLogs end: kubenet-605794 [took: 2.732139195s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-605794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-605794
--- SKIP: TestNetworkPlugins/group/kubenet (2.88s)
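
Every debug query above fails in one of two ways: kubectl reports "context was not found" and minikube reports the profile as not found, both because the kubenet profile was skipped before "minikube start" ever ran, so no context was written to the kubeconfig. A small, self-contained way to confirm that from the kubeconfig side, assuming only that kubectl is on PATH (this guard is not part of the suite):

package example

import (
	"bytes"
	"os/exec"
	"strings"
	"testing"
)

// contextExists asks kubectl for its known context names and checks whether
// the given one is among them.
func contextExists(name string) (bool, error) {
	var out bytes.Buffer
	cmd := exec.Command("kubectl", "config", "get-contexts", "-o", "name")
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(out.String()) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

// TestKubenetContextMissing demonstrates the condition captured in the logs:
// the context simply is not there, so any kubectl call naming it must fail.
func TestKubenetContextMissing(t *testing.T) {
	ok, err := contextExists("kubenet-605794")
	if err != nil {
		t.Skipf("kubectl not available: %v", err)
	}
	if ok {
		t.Log("context exists; the debug queries would have succeeded")
	} else {
		t.Log("context missing; every kubectl-based debug query fails as shown above")
	}
}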

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-605794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-605794" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-605794

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-605794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-605794"

                                                
                                                
----------------------- debugLogs end: cilium-605794 [took: 6.033079126s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-605794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-605794
--- SKIP: TestNetworkPlugins/group/cilium (6.21s)
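
The cilium run hits the same wall from the minikube side: host-level queries answer with "Profile \"cilium-605794\" not found" because the profile was only registered as a placeholder and never started. If one wanted to verify that outside the suite, "minikube profile list --output json" lists known profiles; the check below deliberately avoids assuming the exact JSON schema, which can vary between minikube versions:

package example

import (
	"os/exec"
	"strings"
	"testing"
)

// profileListed runs "minikube profile list --output json" and does a plain
// substring check for the profile name, sidestepping schema details.
func profileListed(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), name), nil
}

func TestCiliumProfileMissing(t *testing.T) {
	ok, err := profileListed("cilium-605794")
	if err != nil {
		t.Skipf("minikube not available: %v", err)
	}
	if !ok {
		t.Log(`profile not listed; the "Profile ... not found" messages above are expected`)
	}
}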

                                                
                                    